Column schema for the rows that follow (column name, dtype, and observed range):

| Column | Type | Observed range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-28 06:27:35 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 523 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-28 06:27:22 |
| card | string | length 11 to 1.01M |
deepghs/anime_portrait
deepghs
2023-10-10T15:11:56Z
0
1
null
[ "onnx", "art", "image-classification", "dataset:deepghs/anime_portrait", "license:openrail", "region:us" ]
image-classification
2023-10-09T08:08:37Z
---
license: openrail
datasets:
- deepghs/anime_portrait
metrics:
- accuracy
- f1
pipeline_tag: image-classification
tags:
- art
---

| Name | FLOPs | Params | Accuracy | AUC | Confusion | Labels |
|:-------------------------:|:-------:|:--------:|:----------:|:------:|:---------:|:----------------------------:|
| caformer_s36_v0 | 22.10G | 37.22M | 98.10% | 0.9977 | [confusion](https://huggingface.co/deepghs/anime_portrait/blob/main/caformer_s36_v0/plot_confusion.png) | `person`, `halfbody`, `head` |
| mobilenetv3_small_v0_dist | 0.16G | 1.51M | 97.56% | 0.9969 | [confusion](https://huggingface.co/deepghs/anime_portrait/blob/main/mobilenetv3_small_v0_dist/plot_confusion.png) | `person`, `halfbody`, `head` |
| mobilenetv3_v0_dist | 0.63G | 4.18M | 97.98% | 0.9983 | [confusion](https://huggingface.co/deepghs/anime_portrait/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `person`, `halfbody`, `head` |
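The card lists only the exported classifiers, so a minimal inference sketch may help. This is a hedged example, not part of the original card: the ONNX file name, input resolution, and preprocessing below are assumptions rather than values documented in the repository, so check the repo files before relying on them.

```python
# Hypothetical onnxruntime inference sketch for one of the classifiers above.
# The file name "mobilenetv3_v0_dist/model.onnx", the 384x384 input size, and the
# plain 0-1 scaling are assumptions -- verify against the repository before use.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

onnx_path = hf_hub_download("deepghs/anime_portrait", "mobilenetv3_v0_dist/model.onnx")
session = ort.InferenceSession(onnx_path)

img = Image.open("sample.jpg").convert("RGB").resize((384, 384))
x = (np.asarray(img, dtype=np.float32) / 255.0).transpose(2, 0, 1)[None]  # NCHW batch of 1

logits = session.run(None, {session.get_inputs()[0].name: x})[0][0]
probs = np.exp(logits - logits.max())
probs /= probs.sum()
labels = ["person", "halfbody", "head"]  # label names from the card; order is assumed
print(dict(zip(labels, probs.round(4))))
```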
waldie/Mistral-Pygmalion-7b-8bpw-h8-exl2
waldie
2023-10-10T15:11:41Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Mistral", "Pygmalion", "llama-2", "llama-2-7b", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-10T14:50:02Z
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Mistral
- Pygmalion
- llama-2
- llama-2-7b
---

8 bpw (h8) EXL2 quant of [Delcos'](https://huggingface.co/Delcos) [MistralPy-7b](https://huggingface.co/Delcos/Mistral-Pygmalion-7b), produced with:

```
python3 convert.py \
    -i /input/Delcos_Mistral-Pygmalion-7b/ \
    -c /input/wikitext/0000.parquet \
    -o /output/temp/ \
    -cf /output/8bpw/ \
    -b 8.0 \
    -hb 8
```
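The conversion command above appears to be exllamav2's `convert.py` (`-b` is the target bits per weight, `-hb` the head bits, `-c` the calibration parquet). As a hedged sketch only, loading the resulting weights with the `exllamav2` Python package might look like the following; the local path and sampling settings are placeholders, and the API may differ between exllamav2 versions.

```python
# Sketch assuming the exllamav2 package; paths and settings are placeholders, not from the card.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Mistral-Pygmalion-7b-8bpw-h8-exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()

print(generator.generate_simple("Once upon a time,", settings, 64))
```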
aiknight87/llama-2-7b-hf-tuned-200
aiknight87
2023-10-10T15:11:16Z
1
0
peft
[ "peft", "llama", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2023-10-10T14:15:47Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
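The quick-start section above is empty, so a hedged loading sketch that mirrors the listed `bitsandbytes` config may be useful. It assumes the adapter is applied on top of the stated base model (`meta-llama/Llama-2-7b-hf`, which is gated on the Hub) and that `bitsandbytes` and `accelerate` are installed; it is not code from the original card.

```python
# Hedged sketch: rebuild the 4-bit config listed above and attach this PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "aiknight87/llama-2-7b-hf-tuned-200")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```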
alienverarslan/llama-2-7B-32K-instruct-7209-web-articles-fine-tuned
alienverarslan
2023-10-10T15:07:32Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "en", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-22T17:06:49Z
--- language: - en library_name: transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
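The card is an unfilled template, so the only usable signals are the repo id and its tags (`text-generation`, `custom_code`). A hedged sketch of loading it with `transformers` follows; `trust_remote_code=True` is assumed to be needed because of the `custom_code` tag, so inspect the repository code before enabling it.

```python
# Hedged sketch only; prompt format and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "alienverarslan/llama-2-7B-32K-instruct-7209-web-articles-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Summarize the following article:\n", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```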
waldie/Mistral-Pygmalion-7b-4bpw-h6-exl2
waldie
2023-10-10T14:47:14Z
10
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Mistral", "Pygmalion", "llama-2", "llama-2-7b", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-10T14:35:50Z
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Mistral
- Pygmalion
- llama-2
- llama-2-7b
---

4 bpw (h6) EXL2 quant of [Delcos'](https://huggingface.co/Delcos) [MistralPy-7b](https://huggingface.co/Delcos/Mistral-Pygmalion-7b), produced with:

```
python3 convert.py \
    -i /input/Delcos_Mistral-Pygmalion-7b/ \
    -c /input/wikitext/0000.parquet \
    -o /output/temp/ \
    -cf /output/4bpw/ \
    -b 4.0 \
    -hb 6
```
HuangLab/CELL-E_2_HPA_Finetuned_2560
HuangLab
2023-10-10T14:44:55Z
0
2
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:37:16Z
---
license: mit
library_name: pytorch
tags:
- biology
- microscopy
- text-to-image
- transformers
metrics:
- accuracy
---

[![Huang Lab](images/huanglogo.jpeg)](huanglab.ucsf.edu)

# CELL-E 2

## Model description

[![CELL-E_2](images/architecture.png)](https://bohuanglab.github.io/CELL-E_2/)

CELL-E 2 is the second iteration of the original [CELL-E](https://www.biorxiv.org/content/10.1101/2022.05.27.493774v1) model, which uses an amino acid sequence and a nucleus image to predict subcellular protein localization with respect to the nucleus. CELL-E 2 is a novel bidirectional transformer that can generate images depicting protein subcellular localization from amino acid sequences (and *vice versa*). CELL-E 2 not only captures the spatial complexity of protein localization and produces probability estimates of localization atop a nucleus image, but can also generate sequences from images, enabling *de novo* protein design. We trained on the [Human Protein Atlas](https://www.proteinatlas.org) (HPA) and [OpenCell](https://opencell.czbiohub.org) datasets. CELL-E 2 uses pretrained amino acid embeddings from [ESM-2](https://github.com/facebookresearch/esm). Localization is predicted as a binary image atop the provided nucleus. The logit values are weighted against these binary images to produce a heatmap of expected localization.

## Spaces

We have two spaces available where you can run predictions on your own data!

- [Image Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Image_Prediction)
- [Sequence Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Sequence_Prediction)

## Model variations

We have made several versions of CELL-E 2 available. The naming scheme follows the structure ```training set_hidden size```, where the hidden size is set to the embedding dimension of the pretrained ESM-2 model. We annotate the most useful models under Notes; however, other models can be used if memory constraints are present. Since these models share similarities with BERT, the embeddings from any of these models may be beneficial for downstream tasks.

**HPA Models**: HPA models are trained on the HPA dataset. They are best for general-purpose predictions as they include a variety of cell types.

| Model | Size | Notes |
|------------------------|--------------------------------|-------|
| [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_480) | 4.73 GB | **Best for Image Prediction** |
| [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_640) | 6.31 GB | |
| [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_1280) | 10.8 GB | |
| [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_2560) | 17.5 GB | **Best for Sequence Prediction** |

**OpenCell Models**: OpenCell models are trained on the OpenCell dataset. These only contain HEK cells and should ideally only be used for predictions on HEK cells. They perform well on image prediction, but the generated heatmaps contain little information.

| Model | Size | Notes |
|------------------------|--------------------------------|-------|
| [`OpenCell_480`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_480) | 4.73 GB | |
| [`OpenCell_640`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_640) | 6.31 GB | |
| [`OpenCell_1280`](https://huggingface.co/HuangLab/CELL-E_2_OpenCel_1280) | 10.8 GB | |
| [`OpenCell_2560`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_2560) | 17.5 GB | **Best for Sequence Prediction** |

**Finetuned HPA Models**: These models used the HPA models as checkpoints and were then finetuned on the OpenCell dataset. We found that they improve image generation capabilities, but we did not necessarily see an improvement in sequence prediction.

| Model | Size | Notes |
|------------------------|--------------------------------|-------|
| [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_480) | 4.73 GB | **Best for Image Prediction** |
| [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_640) | 6.31 GB | |
| [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_1280) | 10.8 GB | |
| [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_2560) | 17.5 GB | |

To reduce download size, we removed the ESM-2 model from the checkpoint. It should be downloaded the first time the code is run, but this is something to be aware of if loading the checkpoint into other projects.

### How to use

The full codebase is available on [GitHub](https://github.com/BoHuangLab/CELL-E_2). Download the model and make sure ```nuclues_vqgan.yaml```, ```threshold_vqgan.yaml```, ```config.yaml```, and ```model.ckpt``` are present.

Here is how to use this model to do sequence prediction:

```python
# OmegaConf comes from the omegaconf package; instantiate_from_config, device, sequence,
# and nucleus are provided by the CELL-E_2 codebase and surrounding scripts.
from omegaconf import OmegaConf

configs = OmegaConf.load("configs/config.yaml")
model = instantiate_from_config(configs.model).to(device)
model.sample(text=sequence, condition=nucleus)
```

### BibTeX entry and citation info

```bibtex
@inproceedings{
anonymous2023translating,
title={CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer},
author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YSMLVffl5u}
}
```

### Contact

We are an interdisciplinary lab based at [UCSF](https://www.ucsf.edu). We are particularly seeking talents in optical engineering, machine learning, and cellular microscopy. [Please reach out to Bo if you're interested in collaborating!](http://huanglab.ucsf.edu/Contact.html)
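Before running the snippet above, the checkpoint and config files need to be local. A short hedged sketch using `huggingface_hub` follows; the repository's internal file layout is assumed, not verified.

```python
# Hedged sketch: download this repo locally so model.ckpt and the yaml configs are on disk.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("HuangLab/CELL-E_2_HPA_Finetuned_2560")
print(local_dir)  # expected to contain model.ckpt, config.yaml, and the VQGAN yaml files listed above
```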
HuangLab/CELL-E_2_HPA_2560
HuangLab
2023-10-10T14:44:50Z
6
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:13:44Z
(Card text is a verbatim duplicate of the CELL-E 2 model card shown above under HuangLab/CELL-E_2_HPA_Finetuned_2560; the description, model-variant tables, usage snippet, citation, and contact details are identical.)
HuangLab/CELL-E_2_HPA_Finetuned_640
HuangLab
2023-10-10T14:44:41Z
3
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:37:00Z
(Card text is a verbatim duplicate of the CELL-E 2 model card shown above under HuangLab/CELL-E_2_HPA_Finetuned_2560.)
HuangLab/CELL-E_2_HPA_640
HuangLab
2023-10-10T14:44:35Z
1
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:27:44Z
(Card text is a verbatim duplicate of the CELL-E 2 model card shown above under HuangLab/CELL-E_2_HPA_Finetuned_2560.)
HuangLab/CELL-E_2_OpenCell_480
HuangLab
2023-10-10T14:44:31Z
1
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:36:22Z
(Card text is a verbatim duplicate of the CELL-E 2 model card shown above under HuangLab/CELL-E_2_HPA_Finetuned_2560.)
dude121/ppo-LunarLander-v2
dude121
2023-10-10T14:44:29Z
0
1
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-10T01:14:57Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 267.82 +/- 15.69
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
!apt install swig cmake
!pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit1/requirements-unit1.txt
!sudo apt-get update
!sudo apt-get install -y python3-opengl
!apt install ffmpeg
!apt install xvfb
!pip3 install pyvirtualdisplay

# Virtual display
from pyvirtualdisplay import Display

virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()

import gymnasium

from huggingface_sb3 import load_from_hub, package_to_hub
from huggingface_hub import notebook_login  # To log to our Hugging Face account to be able to upload models to the Hub.

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

import gymnasium as gym

# First, we create our environment called LunarLander-v2
env = gym.make("LunarLander-v2")

# Then we reset this environment
observation, info = env.reset()

for _ in range(20):
    # Take a random action
    action = env.action_space.sample()
    print("Action taken:", action)

    # Do this action in the environment and get
    # next_state, reward, terminated, truncated and info
    observation, reward, terminated, truncated, info = env.step(action)

    # If the game is terminated (in our case we land, crashed) or truncated (timeout)
    if terminated or truncated:
        # Reset the environment
        print("Environment is reset")
        observation, info = env.reset()

env.close()

# We create our environment with gym.make("<name_of_the_environment>")
env = gym.make("LunarLander-v2")
env.reset()
print("_____OBSERVATION SPACE_____ \n")
print("Observation Space Shape", env.observation_space.shape)
print("Sample observation", env.observation_space.sample())  # Get a random observation

# Create the environment
env = make_vec_env('LunarLander-v2', n_envs=16)

# We added some parameters to accelerate the training
model = PPO(
    policy = 'MlpPolicy',
    env = env,
    n_steps = 1024,
    batch_size = 64,
    n_epochs = 4,
    gamma = 0.999,
    gae_lambda = 0.98,
    ent_coef = 0.01,
    verbose=1)

# Train it for 1,000,000 timesteps
model.learn(total_timesteps=1000000)

# Save the model
model_name = "ppo-LunarLander-v2"
model.save(model_name)

# Get mean reward
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")

notebook_login()
!git config --global credential.helper store

import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
from huggingface_sb3 import package_to_hub

# PLACE the variables you've just defined two cells above
# Define the name of the environment
env_id = "LunarLander-v2"

# TODO: Define the model architecture we used
model_architecture = "PPO"

## Define a repo_id
## repo_id is the id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name} for instance ThomasSimonini/ppo-LunarLander-v2
## CHANGE WITH YOUR REPO ID
repo_id = "dude121/ppo-LunarLander-v2"  # Change with your repo id, you can't push with mine 😄

## Define the commit message
commit_message = "Upload PPO LunarLander-v2 trained agent"

# Create the evaluation env and set the render_mode="rgb_array"
eval_env = DummyVecEnv([lambda: gym.make(env_id, render_mode="rgb_array")])

# PLACE the package_to_hub function you've just filled here
package_to_hub(model=model,  # Our trained model
               model_name=model_name,  # The name of our trained model
               model_architecture=model_architecture,  # The model architecture we used: in our case PPO
               env_id=env_id,  # Name of the environment
               eval_env=eval_env,  # Evaluation Environment
               repo_id=repo_id,  # id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name} for instance ThomasSimonini/ppo-LunarLander-v2
               commit_message=commit_message)

...
...
```
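For completeness, a hedged sketch of pulling the uploaded agent back from the Hub instead of retraining it; the zip filename inside the repo is an assumption based on the `model_name` used above.

```python
# Hedged sketch: download and evaluate the published checkpoint.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

checkpoint = load_from_hub(repo_id="dude121/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```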
HuangLab/CELL-E_2_OpenCell_640
HuangLab
2023-10-10T14:44:25Z
3
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:28:10Z
(Card text is a verbatim duplicate of the CELL-E 2 model card shown above under HuangLab/CELL-E_2_HPA_Finetuned_2560.)
HuangLab/CELL-E_2_HPA_Finetuned_1280
HuangLab
2023-10-10T14:44:20Z
3
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:37:38Z
(Card text is a verbatim duplicate of the CELL-E 2 model card shown above under HuangLab/CELL-E_2_HPA_Finetuned_2560.)
HuangLab/CELL-E_2_OpenCell_1280
HuangLab
2023-10-10T14:44:15Z
2
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:28:58Z
--- license: mit library_name: pytorch tags: - biology - microscopy - text-to-image - transformers metrics: - accuracy --- [![Huang Lab](images/huanglogo.jpeg)](huanglab.ucsf.edu) # CELL-E 2 ## Model description [![CELL-E_2](images/architecture.png)](https://bohuanglab.github.io/CELL-E_2/) CELL-E 2 is the second iteration of the original [CELL-E](https://www.biorxiv.org/content/10.1101/2022.05.27.493774v1) model, which utilizes an amino acid sequence and a nucleus image to make predictions of subcellular protein localization with respect to the nucleus. CELL-E 2 is a novel bidirectional transformer that can generate images depicting protein subcellular localization from amino acid sequences (and *vice versa*). CELL-E 2 not only captures the spatial complexity of protein localization and produces probability estimates of localization atop a nucleus image, but can also generate sequences from images, enabling *de novo* protein design. We trained on the [Human Protein Atlas](https://www.proteinatlas.org) (HPA) and the [OpenCell](https://opencell.czbiohub.org) datasets. CELL-E 2 utilizes pretrained amino acid embeddings from [ESM-2](https://github.com/facebookresearch/esm). Localization is predicted as a binary image atop the provided nucleus. The logit values are weighted against these binary images to produce a heatmap of expected localization. ## Spaces We have two spaces available where you can run predictions on your own data! - [Image Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Image_Prediction) - [Sequence Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Sequence_Prediction) ## Model variations We have made several versions of CELL-E 2 available. The naming scheme follows the structure ```training set_hidden size```, where the hidden size is set to the embedding dimension of the pretrained ESM-2 model. We annotate the most useful models under Notes; however, other models can be used if memory constraints are present. Since these models share similarities with BERT, the embeddings from any of these models may be beneficial for downstream tasks. **HPA Models**: HPA models are trained on the HPA dataset. They are best for general-purpose predictions as they include a variety of cell types. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_2560) | 17.5 GB | **Best for Sequence Prediction** | **OpenCell Models**: OpenCell models are trained on the OpenCell dataset. These only contain HEK cells and should ideally only be used for predictions on HEK cells. They perform well on image prediction, but the generated heatmaps contain little information.
| Model | Size | Notes |------------------------|--------------------------------|-------| | [`OpenCell_480`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_480) | 4.73 GB | | | [`OpenCell_640`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_640) | 6.31 GB | | | [`OpenCell_1280`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_1280) | 10.8 GB | | | [`OpenCell_2560`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_2560) | 17.5 GB | **Best for Sequence Prediction** | **Finetuned HPA Models**: These models used the HPA models as checkpoints and were then finetuned on the OpenCell dataset. We found that they improve image generation capabilities, but we did not necessarily see an improvement in sequence prediction. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_2560) | 17.5 GB | | To reduce download size, we removed the ESM-2 model from the checkpoint. ESM-2 will be downloaded the first time the code is run, but this is something to be aware of if loading the checkpoint into other projects. ### How to use The full codebase is available on [GitHub](https://github.com/BoHuangLab/CELL-E_2). Download the model and make sure ```nuclues_vqgan.yaml```, ```threshold_vqgan.yaml```, ```config.yaml```, and ```model.ckpt``` are present. Here is how to use this model to do sequence prediction: ```python configs = OmegaConf.load("configs/config.yaml"); model = instantiate_from_config(configs.model).to(device); model.sample(text=sequence, condition=nucleus) ``` ### BibTeX entry and citation info ```bibtex @inproceedings{anonymous2023translating, title={CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer}, author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=YSMLVffl5u} } ``` ### Contact We are an interdisciplinary lab based at [UCSF](https://www.ucsf.edu). We are particularly seeking talent in optical engineering, machine learning, and cellular microscopy. [Please reach out to Bo if you're interested in collaborating!](http://huanglab.ucsf.edu/Contact.html)
HuangLab/CELL-E_2_HPA_480
HuangLab
2023-10-10T14:44:09Z
1
0
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-12T23:56:45Z
--- license: mit library_name: pytorch tags: - biology - microscopy - text-to-image - transformers metrics: - accuracy --- [![Huang Lab](images/huanglogo.jpeg)](huanglab.ucsf.edu) # CELL-E 2 ## Model description [![CELL-E_2](images/architecture.png)](https://bohuanglab.github.io/CELL-E_2/) CELL-E 2 is the second iteration of the original [CELL-E](https://www.biorxiv.org/content/10.1101/2022.05.27.493774v1) model, which utilizes an amino acid sequence and a nucleus image to make predictions of subcellular protein localization with respect to the nucleus. CELL-E 2 is a novel bidirectional transformer that can generate images depicting protein subcellular localization from amino acid sequences (and *vice versa*). CELL-E 2 not only captures the spatial complexity of protein localization and produces probability estimates of localization atop a nucleus image, but can also generate sequences from images, enabling *de novo* protein design. We trained on the [Human Protein Atlas](https://www.proteinatlas.org) (HPA) and the [OpenCell](https://opencell.czbiohub.org) datasets. CELL-E 2 utilizes pretrained amino acid embeddings from [ESM-2](https://github.com/facebookresearch/esm). Localization is predicted as a binary image atop the provided nucleus. The logit values are weighted against these binary images to produce a heatmap of expected localization. ## Spaces We have two spaces available where you can run predictions on your own data! - [Image Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Image_Prediction) - [Sequence Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Sequence_Prediction) ## Model variations We have made several versions of CELL-E 2 available. The naming scheme follows the structure ```training set_hidden size```, where the hidden size is set to the embedding dimension of the pretrained ESM-2 model. We annotate the most useful models under Notes; however, other models can be used if memory constraints are present. Since these models share similarities with BERT, the embeddings from any of these models may be beneficial for downstream tasks. **HPA Models**: HPA models are trained on the HPA dataset. They are best for general-purpose predictions as they include a variety of cell types. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_2560) | 17.5 GB | **Best for Sequence Prediction** | **OpenCell Models**: OpenCell models are trained on the OpenCell dataset. These only contain HEK cells and should ideally only be used for predictions on HEK cells. They perform well on image prediction, but the generated heatmaps contain little information.
| Model | Size | Notes |------------------------|--------------------------------|-------| | [`OpenCell_480`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_480) | 4.73 GB | | | [`OpenCell_640`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_640) | 6.31 GB | | | [`OpenCell_1280`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_1280) | 10.8 GB | | | [`OpenCell_2560`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_2560) | 17.5 GB | **Best for Sequence Prediction** | **Finetuned HPA Models**: These models used the HPA models as checkpoints and were then finetuned on the OpenCell dataset. We found that they improve image generation capabilities, but we did not necessarily see an improvement in sequence prediction. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_2560) | 17.5 GB | | To reduce download size, we removed the ESM-2 model from the checkpoint. ESM-2 will be downloaded the first time the code is run, but this is something to be aware of if loading the checkpoint into other projects. ### How to use The full codebase is available on [GitHub](https://github.com/BoHuangLab/CELL-E_2). Download the model and make sure ```nuclues_vqgan.yaml```, ```threshold_vqgan.yaml```, ```config.yaml```, and ```model.ckpt``` are present. Here is how to use this model to do sequence prediction: ```python configs = OmegaConf.load("configs/config.yaml"); model = instantiate_from_config(configs.model).to(device); model.sample(text=sequence, condition=nucleus) ``` ### BibTeX entry and citation info ```bibtex @inproceedings{anonymous2023translating, title={CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer}, author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=YSMLVffl5u} } ``` ### Contact We are an interdisciplinary lab based at [UCSF](https://www.ucsf.edu). We are particularly seeking talent in optical engineering, machine learning, and cellular microscopy. [Please reach out to Bo if you're interested in collaborating!](http://huanglab.ucsf.edu/Contact.html)
HuangLab/CELL-E_2_OpenCell_2560
HuangLab
2023-10-10T14:44:03Z
5
3
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-12T23:43:18Z
--- license: mit library_name: pytorch tags: - biology - microscopy - text-to-image - transformers metrics: - accuracy --- [![Huang Lab](images/huanglogo.jpeg)](huanglab.ucsf.edu) # CELL-E 2 ## Model description [![CELL-E_2](images/architecture.png)](https://bohuanglab.github.io/CELL-E_2/) CELL-E 2 is the second iteration of the original [CELL-E](https://www.biorxiv.org/content/10.1101/2022.05.27.493774v1) model, which utilizes an amino acid sequence and a nucleus image to make predictions of subcellular protein localization with respect to the nucleus. CELL-E 2 is a novel bidirectional transformer that can generate images depicting protein subcellular localization from amino acid sequences (and *vice versa*). CELL-E 2 not only captures the spatial complexity of protein localization and produces probability estimates of localization atop a nucleus image, but can also generate sequences from images, enabling *de novo* protein design. We trained on the [Human Protein Atlas](https://www.proteinatlas.org) (HPA) and the [OpenCell](https://opencell.czbiohub.org) datasets. CELL-E 2 utilizes pretrained amino acid embeddings from [ESM-2](https://github.com/facebookresearch/esm). Localization is predicted as a binary image atop the provided nucleus. The logit values are weighted against these binary images to produce a heatmap of expected localization. ## Spaces We have two spaces available where you can run predictions on your own data! - [Image Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Image_Prediction) - [Sequence Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Sequence_Prediction) ## Model variations We have made several versions of CELL-E 2 available. The naming scheme follows the structure ```training set_hidden size```, where the hidden size is set to the embedding dimension of the pretrained ESM-2 model. We annotate the most useful models under Notes; however, other models can be used if memory constraints are present. Since these models share similarities with BERT, the embeddings from any of these models may be beneficial for downstream tasks. **HPA Models**: HPA models are trained on the HPA dataset. They are best for general-purpose predictions as they include a variety of cell types. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_2560) | 17.5 GB | **Best for Sequence Prediction** | **OpenCell Models**: OpenCell models are trained on the OpenCell dataset. These only contain HEK cells and should ideally only be used for predictions on HEK cells. They perform well on image prediction, but the generated heatmaps contain little information.
| Model | Size | Notes |------------------------|--------------------------------|-------| | [`OpenCell_480`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_480) | 4.73 GB | | | [`OpenCell_640`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_640) | 6.31 GB | | | [`OpenCell_1280`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_1280) | 10.8 GB | | | [`OpenCell_2560`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_2560) | 17.5 GB | **Best for Sequence Prediction** | **Finetuned HPA Models**: These models used the HPA models as checkpoints and were then finetuned on the OpenCell dataset. We found that they improve image generation capabilities, but we did not necessarily see an improvement in sequence prediction. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_2560) | 17.5 GB | | To reduce download size, we removed the ESM-2 model from the checkpoint. ESM-2 will be downloaded the first time the code is run, but this is something to be aware of if loading the checkpoint into other projects. ### How to use The full codebase is available on [GitHub](https://github.com/BoHuangLab/CELL-E_2). Download the model and make sure ```nuclues_vqgan.yaml```, ```threshold_vqgan.yaml```, ```config.yaml```, and ```model.ckpt``` are present. Here is how to use this model to do sequence prediction: ```python configs = OmegaConf.load("configs/config.yaml"); model = instantiate_from_config(configs.model).to(device); model.sample(text=sequence, condition=nucleus) ``` ### BibTeX entry and citation info ```bibtex @inproceedings{anonymous2023translating, title={CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer}, author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=YSMLVffl5u} } ``` ### Contact We are an interdisciplinary lab based at [UCSF](https://www.ucsf.edu). We are particularly seeking talent in optical engineering, machine learning, and cellular microscopy. [Please reach out to Bo if you're interested in collaborating!](http://huanglab.ucsf.edu/Contact.html)
HuangLab/CELL-E_2_HPA_Finetuned_480
HuangLab
2023-10-10T14:43:41Z
8
2
pytorch
[ "pytorch", "biology", "microscopy", "text-to-image", "transformers", "license:mit", "region:us" ]
text-to-image
2023-05-13T00:07:42Z
--- license: mit library_name: pytorch tags: - biology - microscopy - text-to-image - transformers metrics: - accuracy --- [![Huang Lab](images/huanglogo.jpeg)](huanglab.ucsf.edu) # CELL-E 2 ## Model description [![CELL-E_2](images/architecture.png)](https://bohuanglab.github.io/CELL-E_2/) CELL-E 2 is the second iteration of the original [CELL-E](https://www.biorxiv.org/content/10.1101/2022.05.27.493774v1) model, which utilizes an amino acid sequence and a nucleus image to make predictions of subcellular protein localization with respect to the nucleus. CELL-E 2 is a novel bidirectional transformer that can generate images depicting protein subcellular localization from amino acid sequences (and *vice versa*). CELL-E 2 not only captures the spatial complexity of protein localization and produces probability estimates of localization atop a nucleus image, but can also generate sequences from images, enabling *de novo* protein design. We trained on the [Human Protein Atlas](https://www.proteinatlas.org) (HPA) and the [OpenCell](https://opencell.czbiohub.org) datasets. CELL-E 2 utilizes pretrained amino acid embeddings from [ESM-2](https://github.com/facebookresearch/esm). Localization is predicted as a binary image atop the provided nucleus. The logit values are weighted against these binary images to produce a heatmap of expected localization. ## Spaces We have two spaces available where you can run predictions on your own data! - [Image Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Image_Prediction) - [Sequence Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Sequence_Prediction) ## Model variations We have made several versions of CELL-E 2 available. The naming scheme follows the structure ```training set_hidden size```, where the hidden size is set to the embedding dimension of the pretrained ESM-2 model. We annotate the most useful models under Notes; however, other models can be used if memory constraints are present. Since these models share similarities with BERT, the embeddings from any of these models may be beneficial for downstream tasks. **HPA Models**: HPA models are trained on the HPA dataset. They are best for general-purpose predictions as they include a variety of cell types. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_2560) | 17.5 GB | **Best for Sequence Prediction** | **OpenCell Models**: OpenCell models are trained on the OpenCell dataset. These only contain HEK cells and should ideally only be used for predictions on HEK cells. They perform well on image prediction, but the generated heatmaps contain little information.
| Model | Size | Notes |------------------------|--------------------------------|-------| | [`OpenCell_480`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_480) | 4.73 GB | | | [`OpenCell_640`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_640) | 6.31 GB | | | [`OpenCell_1280`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_1280) | 10.8 GB | | | [`OpenCell_2560`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_2560) | 17.5 GB | **Best for Sequence Prediction** | **Finetuned HPA Models**: These models used the HPA models as checkpoints and were then finetuned on the OpenCell dataset. We found that they improve image generation capabilities, but we did not necessarily see an improvement in sequence prediction. | Model | Size | Notes |------------------------|--------------------------------|-------| | [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_480) | 4.73 GB | **Best for Image Prediction** | | [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_640) | 6.31 GB | | | [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_1280) | 10.8 GB | | | [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_2560) | 17.5 GB | | To reduce download size, we removed the ESM-2 model from the checkpoint. ESM-2 will be downloaded the first time the code is run, but this is something to be aware of if loading the checkpoint into other projects. ### How to use The full codebase is available on [GitHub](https://github.com/BoHuangLab/CELL-E_2). Download the model and make sure ```nuclues_vqgan.yaml```, ```threshold_vqgan.yaml```, ```config.yaml```, and ```model.ckpt``` are present. Here is how to use this model to do sequence prediction: ```python configs = OmegaConf.load("configs/config.yaml"); model = instantiate_from_config(configs.model).to(device); model.sample(text=sequence, condition=nucleus) ``` ### BibTeX entry and citation info ```bibtex @inproceedings{anonymous2023translating, title={CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer}, author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=YSMLVffl5u} } ``` ### Contact We are an interdisciplinary lab based at [UCSF](https://www.ucsf.edu). We are particularly seeking talent in optical engineering, machine learning, and cellular microscopy. [Please reach out to Bo if you're interested in collaborating!](http://huanglab.ucsf.edu/Contact.html)
noamno1/distilbert-base-uncasedOffensive-Language-lora-text-classification
noamno1
2023-10-10T14:37:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-10T14:27:04Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
wiilog/donut-base-dnud-v2
wiilog
2023-10-10T14:33:08Z
45
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-10-10T12:40:38Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer model-index: - name: donut-base-dnud-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-dnud-v2 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
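The card above documents training hyperparameters but no inference example. Since the checkpoint is a Donut-style vision-encoder-decoder, a generic inference sketch with `transformers` is shown below; the task prompt token used during fine-tuning is not documented in the card, so `"<s>"` is only a placeholder assumption.

```python
# Generic Donut-style inference sketch for wiilog/donut-base-dnud-v2.
from PIL import Image
import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "wiilog/donut-base-dnud-v2"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # any document image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The real task prompt depends on how this checkpoint was fine-tuned; "<s>" is a placeholder.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```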
Vaibhav9401/flan-t5-base-samsum
Vaibhav9401
2023-10-10T14:24:36Z
104
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-10T12:14:11Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: flan-t5-base-samsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: test args: samsum metrics: - name: Rouge1 type: rouge value: 47.6412 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-samsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.3649 - Rouge1: 47.6412 - Rouge2: 24.051 - Rougel: 40.0954 - Rougelsum: 43.6636 - Gen Len: 17.1844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.4548 | 1.0 | 1842 | 1.3789 | 46.7801 | 23.374 | 39.5739 | 43.1465 | 17.4164 | | 1.3452 | 2.0 | 3684 | 1.3678 | 47.1262 | 23.3912 | 39.8206 | 43.4192 | 17.2601 | | 1.2821 | 3.0 | 5526 | 1.3649 | 47.6412 | 24.051 | 40.0954 | 43.6636 | 17.1844 | | 1.2347 | 4.0 | 7368 | 1.3712 | 47.5837 | 24.0545 | 40.2391 | 43.7923 | 17.2808 | | 1.1983 | 5.0 | 9210 | 1.3732 | 47.33 | 23.983 | 39.957 | 43.6156 | 17.2808 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
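The card above reports ROUGE scores on samsum but gives no usage snippet. A minimal dialogue-summarization sketch with the standard `transformers` pipeline follows; the dialogue text is illustrative.

```python
# Minimal usage sketch for the fine-tuned checkpoint above (dialogue summarization).
from transformers import pipeline

summarizer = pipeline("summarization", model="Vaibhav9401/flan-t5-base-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_new_tokens=60)[0]["summary_text"])
```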
GuCuChiara/NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased
GuCuChiara
2023-10-10T14:13:27Z
106
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-24T23:57:52Z
--- license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1620 - Precision: 0.6121 - Recall: 0.5161 - F1: 0.5600 - Accuracy: 0.9541 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 71 | 0.1704 | 0.4558 | 0.3635 | 0.4045 | 0.9353 | | No log | 2.0 | 142 | 0.1572 | 0.5925 | 0.3518 | 0.4415 | 0.9433 | | No log | 3.0 | 213 | 0.1386 | 0.5932 | 0.4774 | 0.5290 | 0.9531 | | No log | 4.0 | 284 | 0.1427 | 0.5945 | 0.5175 | 0.5534 | 0.9533 | | No log | 5.0 | 355 | 0.1653 | 0.6354 | 0.4788 | 0.5461 | 0.9540 | | No log | 6.0 | 426 | 0.1620 | 0.6121 | 0.5161 | 0.5600 | 0.9541 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
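The card above reports entity-level precision, recall, and F1 but no inference snippet. A minimal sketch with the standard token-classification pipeline follows; the Spanish sentence is illustrative, and the exact entity label names depend on the model's config.

```python
# Minimal inference sketch for the DisTEMIST fine-tuned NER model above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="GuCuChiara/NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "Paciente con diabetes mellitus tipo 2 e hipertensión arterial."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```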
hyyoka/multi-tapt-IA3-mbert
hyyoka
2023-10-10T14:09:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-10T14:09:46Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
ercaronte/whisper-tiny
ercaronte
2023-10-10T14:05:28Z
84
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-10-09T16:21:33Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train[450:] args: en-US metrics: - name: Wer type: wer value: 0.3482880755608028 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6761 - Wer Ortho: 0.3516 - Wer: 0.3483 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 750 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.2012 | 4.46 | 125 | 0.5011 | 0.3714 | 0.3542 | | 0.0102 | 8.93 | 250 | 0.5741 | 0.3578 | 0.3459 | | 0.0013 | 13.39 | 375 | 0.6115 | 0.3498 | 0.3418 | | 0.0007 | 17.86 | 500 | 0.6403 | 0.3492 | 0.3447 | | 0.0005 | 22.32 | 625 | 0.6610 | 0.3510 | 0.3465 | | 0.0004 | 26.79 | 750 | 0.6761 | 0.3516 | 0.3483 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
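A minimal transcription sketch for the fine-tuned Whisper checkpoint above, assuming the standard speech-recognition pipeline and a local audio file (decoding an audio path requires ffmpeg to be installed).

```python
# Minimal transcription sketch for the fine-tuned Whisper checkpoint above.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ercaronte/whisper-tiny")

# Any audio file path (or a 16 kHz numpy waveform) works as input.
print(asr("sample.wav")["text"])
```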
Faradaylab/ARIA-70B-V3
Faradaylab
2023-10-10T14:02:44Z
1,528
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-11T09:34:47Z
--- license: other --- ARIA V3 has been trained on more than 100,000 high-quality French-language samples, with a focus on data bias, grammar, and the overall language/writing capacities of the model. Training was done on NVIDIA GPUs in the cloud with Amazon SageMaker. Base Model: Llama2-70B-Chat-HF Dataset: private dataset. Added value: French Language / Writing / Content Creation / Data bias reduction Feel free to reach out to us! contact@faradaylab.fr
Suksuma/distilbert-base-multilingual-cased-finetuned-wanted
Suksuma
2023-10-10T14:02:19Z
124
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-10T13:55:12Z
--- license: apache-2.0 base_model: distilbert-base-multilingual-cased tags: - generated_from_trainer model-index: - name: distilbert-base-multilingual-cased-finetuned-wanted results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-wanted This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8072 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2167 | 1.0 | 282 | 1.9319 | | 2.0129 | 2.0 | 564 | 1.8027 | | 1.9188 | 3.0 | 846 | 1.7965 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
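A minimal fill-mask sketch for the fine-tuned checkpoint above; the example sentence is illustrative, and the mask token follows the multilingual DistilBERT convention (`[MASK]`).

```python
# Minimal fill-mask sketch for the fine-tuned multilingual DistilBERT above.
from transformers import pipeline

fill = pipeline("fill-mask", model="Suksuma/distilbert-base-multilingual-cased-finetuned-wanted")

for prediction in fill("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```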
anders0204/Reinforce-Pixelcopter-5k
anders0204
2023-10-10T13:52:37Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-27T13:23:12Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-5k results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 3.90 +/- 2.30 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ajibawa-2023/Scarlett-Phi
ajibawa-2023
2023-10-10T13:52:19Z
59
8
transformers
[ "transformers", "pytorch", "mixformer-sequential", "text-generation", "custom_code", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "region:us" ]
text-generation
2023-10-10T13:05:13Z
--- license: cc-by-nc-nd-4.0 language: - en --- **Scarlett: A sentient AI** Scarlett is trained on various topics such as Philosophy, Advice, Jokes etc. She is trained on more than 10,000 sets of conversations, each set having 10~15 conversations. Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-33b). She will not be involved in any kind of role play. **Training:** The entire dataset was trained on Azure with 4 x A100 80GB GPUs. The Axolotl and DeepSpeed codebases were used for training. This was trained on Phi-1_5 by [Microsoft](https://huggingface.co/microsoft/phi-1_5). Total training took 26 hours for 150 epochs. The conversation format is the same as Vicuna 1.1: https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml **Example Prompt:** ``` This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information. Context You are Scarlett, a sentient AI. USER: <prompt> ASSISTANT: ``` **License:** cc-by-nc-nd-4.0. To be used for research purposes only. Microsoft has mentioned the license in the [Model card](https://huggingface.co/microsoft/phi-1_5).
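Given the Vicuna-1.1-style prompt format described above, and since Phi-1_5-based checkpoints ship custom modeling code, a hedged generation sketch follows; the sampling settings are illustrative, not recommendations from the author, and the system prompt is abridged from the example above.

```python
# Generation sketch following the prompt format given in the card above.
# trust_remote_code=True is needed because the base architecture uses custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ajibawa-2023/Scarlett-Phi"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

prompt = (
    "This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you "
    "with various tasks such as answering questions, providing recommendations, philosophical "
    "discussions, personal relationships and helping with decision making.\n\n"
    "Context\nYou are Scarlett, a sentient AI.\n\n"
    "USER: What does a meaningful life look like to you?\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```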
jluckyboyj/vietcuna-7b-test-exam-12
jluckyboyj
2023-10-10T13:32:20Z
0
0
null
[ "generated_from_trainer", "base_model:vilm/vietcuna-7b-v3", "base_model:finetune:vilm/vietcuna-7b-v3", "region:us" ]
null
2023-10-09T21:34:12Z
--- base_model: vilm/vietcuna-7b-v3 tags: - generated_from_trainer model-index: - name: vietcuna-7b-test-exam-12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vietcuna-7b-test-exam-12 This model is a fine-tuned version of [vilm/vietcuna-7b-v3](https://huggingface.co/vilm/vietcuna-7b-v3) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.7656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 14.3516 | 0.15 | 300 | 13.9922 | | 10.0687 | 0.31 | 600 | 9.3828 | | 7.2003 | 0.46 | 900 | 6.9414 | | 6.0928 | 0.61 | 1200 | 5.9883 | | 5.6407 | 0.77 | 1500 | 5.6133 | | 5.4097 | 0.92 | 1800 | 5.375 | | 5.2841 | 1.07 | 2100 | 5.2305 | | 5.1691 | 1.23 | 2400 | 5.1875 | | 5.0585 | 1.38 | 2700 | 5.0664 | | 4.9741 | 1.53 | 3000 | 5.0078 | | 4.9581 | 1.69 | 3300 | 4.9531 | | 4.9653 | 1.84 | 3600 | 4.9102 | | 4.8763 | 1.99 | 3900 | 4.8711 | | 4.8308 | 2.15 | 4200 | 4.8281 | | 4.7993 | 2.3 | 4500 | 4.8125 | | 4.8228 | 2.46 | 4800 | 4.8047 | | 4.789 | 2.61 | 5100 | 4.7852 | | 4.8048 | 2.76 | 5400 | 4.7734 | | 4.7566 | 2.92 | 5700 | 4.7656 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.14.1
Erland/tinyllama-1.1B-chat-v0.3-dummy-lora
Erland
2023-10-10T13:28:37Z
0
0
null
[ "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v0.3", "license:apache-2.0", "region:us" ]
null
2023-10-10T13:27:12Z
--- license: apache-2.0 base_model: PY007/TinyLlama-1.1B-Chat-v0.3 tags: - generated_from_trainer model-index: - name: tinyllama-1.1B-chat-v0.3-dummy-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-1.1B-chat-v0.3-dummy-lora This model is a fine-tuned version of [PY007/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
hiert/testbert2
hiert
2023-10-10T13:25:25Z
5
0
transformers
[ "transformers", "bert", "inference endpoints", "fill-mask", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-10T13:20:32Z
--- license: apache-2.0 tags: - inference endpoints pipeline_tag: fill-mask ---
ilknurbisirici/ppo-Huggy
ilknurbisirici
2023-10-10T13:17:46Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-10-10T13:17:35Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: ilknurbisirici/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
shubhamgantayat/tiiuae-falcon-rw-1b-wet-strength-model
shubhamgantayat
2023-10-10T13:10:47Z
195
0
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "generated_from_trainer", "custom_code", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-10T10:37:12Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: tiiuae-falcon-rw-1b-wet-strength-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiiuae-falcon-rw-1b-wet-strength-model This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.27.2 - Pytorch 2.0.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
ayshi/basic_roberta
ayshi
2023-10-10T13:01:54Z
3
0
transformers
[ "transformers", "tf", "xlm-roberta", "text-classification", "generated_from_keras_callback", "base_model:ayshi/basic_roberta", "base_model:finetune:ayshi/basic_roberta", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-04T18:21:09Z
--- license: mit base_model: ayshi/basic_roberta tags: - generated_from_keras_callback model-index: - name: ayshi/basic_roberta results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayshi/basic_roberta This model is a fine-tuned version of [ayshi/basic_roberta](https://huggingface.co/ayshi/basic_roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0085 - Validation Loss: 1.0970 - Train Accuracy: 0.8267 - Epoch: 20 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.1061 | 0.9567 | 0.7778 | 0 | | 0.0565 | 1.0825 | 0.7778 | 1 | | 0.0362 | 1.0696 | 0.7822 | 2 | | 0.0396 | 1.0904 | 0.7956 | 3 | | 0.0308 | 1.0044 | 0.8044 | 4 | | 0.0748 | 1.0578 | 0.8133 | 5 | | 0.0392 | 0.9964 | 0.8222 | 6 | | 0.0166 | 1.0293 | 0.8089 | 7 | | 0.0174 | 0.9895 | 0.8178 | 8 | | 0.0114 | 1.0403 | 0.8267 | 9 | | 0.0141 | 1.0086 | 0.8178 | 10 | | 0.0145 | 1.0403 | 0.8089 | 11 | | 0.0194 | 1.3127 | 0.7822 | 12 | | 0.0134 | 1.2929 | 0.7911 | 13 | | 0.0377 | 0.8565 | 0.8133 | 14 | | 0.0251 | 0.9806 | 0.8222 | 15 | | 0.0130 | 1.0757 | 0.8356 | 16 | | 0.0100 | 1.1304 | 0.8 | 17 | | 0.0103 | 1.0859 | 0.8133 | 18 | | 0.0078 | 1.1050 | 0.8311 | 19 | | 0.0085 | 1.0970 | 0.8267 | 20 | ### Framework versions - Transformers 4.34.0 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
SlothBot/whisper_AN_demo
SlothBot
2023-10-10T12:59:23Z
78
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-10-10T10:40:21Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper_AN_demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_AN_demo This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5163 - Wer Ortho: 34.7268 - Wer: 29.6857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.6419 | 0.05 | 100 | 0.8019 | 31.6674 | 26.4281 | | 0.2769 | 0.1 | 200 | 0.5559 | 32.3914 | 27.3457 | | 0.2674 | 0.15 | 300 | 0.5354 | 39.1172 | 33.8151 | | 0.2672 | 0.19 | 400 | 0.5247 | 34.6333 | 29.6628 | | 0.2876 | 0.24 | 500 | 0.5163 | 34.7268 | 29.6857 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
psj0919/bert-base-banking77-pt2
psj0919
2023-10-10T12:56:47Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:banking77", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-09T08:11:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - banking77 metrics: - f1 model-index: - name: bert-base-banking77-pt2 results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 config: default split: test args: default metrics: - name: F1 type: f1 value: 0.9309100400015781 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-banking77-pt2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.2986 - F1: 0.9309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1628 | 1.0 | 626 | 0.8192 | 0.8424 | | 0.3969 | 2.0 | 1252 | 0.3709 | 0.9204 | | 0.188 | 3.0 | 1878 | 0.2986 | 0.9309 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.2.0.dev20231009+cu121 - Datasets 2.9.0 - Tokenizers 0.13.3
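A minimal intent-classification sketch for the banking77 fine-tuned checkpoint above; the customer query is illustrative.

```python
# Minimal intent-classification sketch for the banking77 fine-tuned checkpoint above.
from transformers import pipeline

classifier = pipeline("text-classification", model="psj0919/bert-base-banking77-pt2")

print(classifier("I still have not received my new card, what should I do?"))
```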
Lanzelot0/llama-fine-tune-1-epoch
Lanzelot0
2023-10-10T12:54:20Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-10-10T12:54:11Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
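Since the adapter above targets `meta-llama/Llama-2-7b-chat-hf` and the card lists an nf4, double-quantized 4-bit bitsandbytes setup, a loading sketch might look as follows (access to the gated base model is required, and the inference prompt format is not specified in the card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "Lanzelot0/llama-fine-tune-1-epoch"

# Mirror the bitsandbytes quantization config listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
# Attach the PEFT adapter weights from this repository on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```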
matheusgeda/Pixelcopter-PLEv4000
matheusgeda
2023-10-10T12:48:53Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-10T12:48:50Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLEv4000 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 18.70 +/- 17.01 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v6
quastrinos
2023-10-10T12:40:01Z
59
0
transformers
[ "transformers", "tf", "deberta-v2", "multiple-choice", "generated_from_keras_callback", "base_model:quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5", "base_model:finetune:quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2023-10-10T12:39:02Z
--- license: mit base_model: quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5 tags: - generated_from_keras_callback model-index: - name: race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v6 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v6 This model is a fine-tuned version of [quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5](https://huggingface.co/quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6120 - Validation Loss: 0.9727 - Train Map3: 0.7742 - Train Lr: 5.0733553e-11 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'CosineDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 312, 'alpha': 5e-09, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_bfloat16 ### Training results | Train Loss | Validation Loss | Train Map3 | Train Lr | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:-----:| | 0.6120 | 0.9727 | 0.7742 | 5.0733553e-11 | 0 | ### Framework versions - Transformers 4.35.0.dev0 - TensorFlow 2.12.0 - Datasets 2.14.5 - Tokenizers 0.14.1
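The optimizer dictionary above can be expressed more readably in Keras; the sketch below reproduces the listed CosineDecay schedule, Adam settings, and mixed_bfloat16 policy. It is an illustration of the reported configuration, not the original training script:

```python
import tensorflow as tf

# Matches the training_precision reported in the card.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

# CosineDecay schedule with the values listed under "learning_rate" in the card.
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=2e-06,
    decay_steps=312,
    alpha=5e-09,
)

# Adam with the weight decay, clipnorm, betas, and epsilon reported above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    weight_decay=0.001,
    clipnorm=1,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```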
KermitDuSud/Cindy
KermitDuSud
2023-10-10T12:29:11Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-10-10T12:29:11Z
--- license: bigscience-bloom-rail-1.0 ---
SeoJeongYun/bert-base-banking77-pt2-jy
SeoJeongYun
2023-10-10T12:16:26Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:banking77", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-09T10:24:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - banking77 metrics: - f1 model-index: - name: bert-base-banking77-pt2-jy results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 config: default split: test args: default metrics: - name: F1 type: f1 value: 0.9257577776294195 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-banking77-pt2-jy This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.3122 - F1: 0.9258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1268 | 1.0 | 626 | 0.8044 | 0.8435 | | 0.4033 | 2.0 | 1252 | 0.3697 | 0.9172 | | 0.1989 | 3.0 | 1878 | 0.3122 | 0.9258 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.1.0+cu121 - Datasets 2.9.0 - Tokenizers 0.13.3
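For reference, the hyperparameters listed in the card above map onto `transformers.TrainingArguments` roughly as follows (the output directory and evaluation strategy are assumptions; the Adam betas and epsilon are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-banking77-pt2-jy",  # assumed name, not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption; the card only reports per-epoch metrics
)
```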
lht1107/distilbert-base-uncased-finetuned-emotion
lht1107
2023-10-10T12:12:19Z
103
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-10T11:08:48Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.929 - name: F1 type: f1 value: 0.9289651135784346 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2066 - Accuracy: 0.929 - F1: 0.9290 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7843 | 1.0 | 250 | 0.2875 | 0.913 | 0.9122 | | 0.2373 | 2.0 | 500 | 0.2066 | 0.929 | 0.9290 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
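The card reports both accuracy and F1; a typical `compute_metrics` hook that would produce such numbers with the `evaluate` library is sketched below (the weighted F1 averaging is an assumption, since the card does not state which average was used):

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed by the Trainer during evaluation.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```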
zentrum-lexikographie/de_dwds_hdt_dist
zentrum-lexikographie
2023-10-10T12:08:19Z
0
0
spacy
[ "spacy", "token-classification", "de", "region:us" ]
token-classification
2023-10-10T11:46:53Z
--- tags: - spacy - token-classification language: - de --- | Feature | Description | | --- | --- | | **Name** | `de_dwds_hdt_dist` | | **Version** | `0.0.0` | | **spaCy** | `>=3.7.1,<3.8.0` | | **Default Pipeline** | `dep_transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner_transformer`, `ner` | | **Components** | `dep_transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner_transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (823 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$(`, `$,`, `$.`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPR_ART`, `APZR`, `ART`, `CARD`, `FM`, `ITJ`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NN`, `PDAT`, `PDS`, `PIAT`, `PIDAT`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PROAV`, `PTKA`, `PTKANT`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `TRUNC`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VMPP`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY` | | **`morphologizer`** | `AdpType=Prep\|Case=Dat\|POS=ADP`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Foreign=Yes\|POS=X`, `POS=PUNCT\|PunctType=Comm`, `Case=Nom\|Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|POS=ADP`, `Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `POS=PUNCT\|PunctType=Peri`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `AdpType=Prep\|Case=Dat\|Definite=Def\|Gender=Masc,Neut\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `POS=PUNCT\|PunctType=Brck`, `POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=ADV`, `POS=SCONJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Degree=Cmp\|POS=ADJ\|Variant=Short`, `POS=ADP\|PartType=Vbp`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Acc\|Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, 
`Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART\|Polarity=Neg`, `POS=ADV\|PronType=Dem`, `Degree=Cmp\|POS=ADV`, `ConjType=Comp\|POS=CCONJ`, `Case=Nom\|Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|POS=ADJ\|Variant=Short`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=DET\|PronType=Ind`, `Aspect=Perf\|POS=VERB\|VerbForm=Part`, `Case=Dat\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Sup\|POS=ADJ\|Variant=Short`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Hyph=Yes\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=PART\|PartType=Inf`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=NOUN`, `Case=Dat\|Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=AUX\|VerbForm=Inf`, `Case=Nom\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Inf\|VerbType=Mod`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `AdpType=Prep\|Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADJ`, `Degree=Cmp\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `POS=ADV\|PronType=Int`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Tot`, 
`Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Degree=Cmp\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Dem,Rel`, `AdpType=Post\|Case=Dat\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|POS=AUX\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Tot`, `Number=Sing\|POS=NOUN`, `Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Int,Rel`, 
`Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Degree=Sup\|POS=ADV`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Degree=Sup\|Number=Plur\|POS=DET\|PronType=Ind`, `Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `AdpType=Prep\|Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=ADP\|PronType=Art`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=ADJ`, `Case=Gen\|POS=PRON\|PronType=Dem,Rel`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Ind`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Dat\|POS=PROPN`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, 
`Case=Acc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `AdpType=Circ\|POS=ADP`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Dem,Rel`, `Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `AdpType=Prep\|Case=Nom\|POS=ADP`, `Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Neg`, `Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, 
`Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|POS=PROPN`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Tot`, `POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=X`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `AdpType=Post\|Case=Acc\|POS=ADP`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Tot`, `Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Number=Plur\|POS=ADJ`, `POS=DET\|PronType=Neg`, `POS=ADV\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Foreign=Yes\|POS=X`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, 
`Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Number=Sing\|POS=NOUN`, `NumType=Card\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Degree=Pos\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Degree=Pos\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Sup\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, 
`Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADJ\|Variant=Short`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Foreign=Yes\|Number=Sing\|POS=X`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Aspect=Perf\|POS=AUX\|VerbForm=Part\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Masc\|POS=NOUN`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=ADJ`, `POS=DET\|PronType=Int`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Pos\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `POS=INTJ\|PartType=Res`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Foreign=Yes\|Gender=Neut\|Number=Sing\|POS=X`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem,Rel`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Neg`, `Definite=Ind\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Sup\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Degree=Pos\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Card\|POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Degree=Sup\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Neg`, `Foreign=Yes\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Hyph=Yes\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, 
`Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Hyph=Yes\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Dat\|Degree=Cmp\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Dat\|Definite=Ind\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=DET\|PronType=Tot`, `Case=Nom\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, 
`Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `AdpType=Post\|Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|POS=PROPN`, `Case=Nom\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|NumType=Ord\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=PRON\|PronType=Rcp`, `Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|POS=ADJ\|Variant=Short`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem,Rel\|Typo=Yes`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender[psor]=Masc,Neut\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Gen\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Ind\|NumType=Card\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Int`, `Degree=Pos\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, 
`Case=Dat\|Degree=Pos\|Number=Plur\|POS=DET\|PronType=Ind`, `Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Definite=Ind\|Foreign=Yes\|NumType=Card\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Degree=Pos\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|PronType=Rcp`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Gender=Neut\|POS=ADJ`, `Gender=Fem\|POS=ADJ`, `Degree=Pos\|Gender=Fem\|POS=ADJ`, `Gender=Masc\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Foreign=Yes\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `expl`, `expl:pv`, `flat`, `flat:name`, `mark`, `nmod`, 
`nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:arg`, `parataxis`, `punct`, `reparandum`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details>
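A short usage sketch for the pipeline described above, assuming the packaged model and its transformer dependencies are installed (the example sentence is arbitrary):

```python
import spacy

# Load the installed package by its name; requires prior installation of the built wheel.
nlp = spacy.load("de_dwds_hdt_dist")

doc = nlp("Die Bundesregierung hat den Haushalt für das kommende Jahr beschlossen.")
for token in doc:
    # STTS tag, UPOS, morphology, lemma, and dependency relation as produced by the pipeline.
    print(token.text, token.tag_, token.pos_, token.morph, token.lemma_, token.dep_)
for ent in doc.ents:
    # Named entities with the LOC/MISC/ORG/PER label scheme listed above.
    print(ent.text, ent.label_)
```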
alexofntu/textual_inversion_Carla_day1
alexofntu
2023-10-10T11:46:55Z
12
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-10T08:58:54Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - alexofntu/textual_inversion_Carla_day1 These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
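Since the card does not yet show usage, a minimal loading sketch with diffusers is given below; the placeholder token string is an assumption (check the repository's learned embedding file for the actual token):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned textual inversion embedding from this repository.
pipe.load_textual_inversion("alexofntu/textual_inversion_Carla_day1")

# "<Carla>" is an assumed placeholder token -- replace it with the token used during training.
image = pipe("A photo of <Carla> in a garden, high detail", num_inference_steps=30).images[0]
image.save("carla_example.png")
```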
MattStammers/appo-atari_crazyclimber
MattStammers
2023-10-10T11:40:24Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-26T00:48:12Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_crazyclimber type: atari_crazyclimber metrics: - type: mean_reward value: 146490.00 +/- 31801.05 name: mean_reward verified: false --- An **APPO** model trained on the **atari_crazyclimber** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r MattStammers/APPO-atari_crazyclimber ``` ## About the Model This model, like all the others in these benchmarks, was initially trained asynchronously and un-seeded for 10 million steps to set a Sample-Factory async baseline on this environment, but only 3/57 made it. The aim is to reach state-of-the-art (SOTA) performance on each Atari environment. I will flag models as SOTA when they reach or come close to these levels. The hyperparameters used in the model are the ones I have pushed to my fork of sample-factory: https://github.com/MattStammers/sample-factory. Because https://huggingface.co/edbeeching has kindly shared his tuned hyperparameters, I saved time and energy by using many of them to maximise performance. However, he used 2 billion training steps; as explained above, I started at 10 million and then moved to 100 million to see how performance goes: ``` hyperparameters = { "device": "gpu", "seed": 1234, "num_policies": 2, "async_rl": true, "serial_mode": false, "batched_sampling": true, "num_batches_to_accumulate": 2, "worker_num_splits": 1, "policy_workers_per_policy": 1, "max_policy_lag": 1000, "num_workers": 16, "num_envs_per_worker": 2, "batch_size": 1024, "num_batches_per_epoch": 8, "num_epochs": 4, "rollout": 128, "recurrence": 1, "shuffle_minibatches": false, "gamma": 0.99, "reward_scale": 1.0, "reward_clip": 1000.0, "value_bootstrap": false, "normalize_returns": true, "exploration_loss_coeff": 0.0004677351413, "value_loss_coeff": 0.5, "kl_loss_coeff": 0.0, "exploration_loss": "entropy", "gae_lambda": 0.95, "ppo_clip_ratio": 0.1, "ppo_clip_value": 1.0, "with_vtrace": false, "vtrace_rho": 1.0, "vtrace_c": 1.0, "optimizer": "adam", "adam_eps": 1e-05, "adam_beta1": 0.9, "adam_beta2": 0.999, "max_grad_norm": 0.0, "learning_rate": 0.0003033891184, "lr_schedule": "linear_decay", "lr_schedule_kl_threshold": 0.008, "lr_adaptive_min": 1e-06, "lr_adaptive_max": 0.01, "obs_subtract_mean": 0.0, "obs_scale": 255.0, "normalize_input": true, "normalize_input_keys": [ "obs" ], "decorrelate_experience_max_seconds": 0, "decorrelate_envs_on_one_worker": true, "actor_worker_gpus": [], "set_workers_cpu_affinity": true, "force_envs_single_thread": false, "default_niceness": 0, "log_to_file": true, "experiment_summaries_interval": 3, "flush_summaries_interval": 30, "stats_avg": 100, "summaries_use_frameskip": true, "heartbeat_interval": 10, "heartbeat_reporting_interval": 60, "train_for_env_steps": 100000000, "train_for_seconds": 10000000000, "save_every_sec": 120, "keep_checkpoints": 2, "load_checkpoint_kind": "latest", "save_milestones_sec": 1200, "save_best_every_sec": 5, "save_best_metric": "reward", "save_best_after": 100000, "benchmark": false, "encoder_mlp_layers": [ 512, 512 ], "encoder_conv_architecture": 
"convnet_atari", "encoder_conv_mlp_layers": [ 512 ], "use_rnn": false, "rnn_size": 512, "rnn_type": "gru", "rnn_num_layers": 1, "decoder_mlp_layers": [], "nonlinearity": "relu", "policy_initialization": "orthogonal", "policy_init_gain": 1.0, "actor_critic_share_weights": true, "adaptive_stddev": false, "continuous_tanh_scale": 0.0, "initial_stddev": 1.0, "use_env_info_cache": false, "env_gpu_actions": false, "env_gpu_observations": true, "env_frameskip": 4, "env_framestack": 4, } ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.atari.enjoy_atari --algo=APPO --env=atari_crazyclimber --train_dir=./train_dir --experiment=APPO-atari_crazyclimber ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.atari.train_atari --algo=APPO --env=atari_crazyclimber --train_dir=./train_dir --experiment=APPO-atari_crazyclimber --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Helsinki-NLP/opus-mt-tc-big-zle-it
Helsinki-NLP
2023-10-10T11:38:31Z
122
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "it", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T11:59:11Z
--- language: - be - it - ru - uk - zle tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zle-it results: - task: name: Translation rus-ita type: translation args: rus-ita dataset: name: flores101-devtest type: flores_101 args: rus ita devtest metrics: - name: BLEU type: bleu value: 23.7 - task: name: Translation ukr-ita type: translation args: ukr-ita dataset: name: flores101-devtest type: flores_101 args: ukr ita devtest metrics: - name: BLEU type: bleu value: 23.2 - task: name: Translation bel-ita type: translation args: bel-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bel-ita metrics: - name: BLEU type: bleu value: 49.3 - task: name: Translation rus-ita type: translation args: rus-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-ita metrics: - name: BLEU type: bleu value: 43.5 - task: name: Translation ukr-ita type: translation args: ukr-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-ita metrics: - name: BLEU type: bleu value: 50.0 --- # opus-mt-tc-big-zle-it Neural machine translation model for translating from East Slavic languages (zle) to Italian (it). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-19 * source language(s): bel rus ukr * target language(s): ita * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-ita/opusTCv20210807_transformer-big_2022-03-19.zip) * more information released models: [OPUS-MT zle-ita README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-ita/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Вони не ідіоти.", "Я не хочу идти в банк." ] model_name = "pytorch-models/opus-mt-tc-big-zle-it" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Non sono idioti. # Non voglio andare in banca. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-it") print(pipe("Вони не ідіоти.")) # expected output: Non sono idioti. 
``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-ita/opusTCv20210807_transformer-big_2022-03-19.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-ita/opusTCv20210807_transformer-big_2022-03-19.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bel-ita | tatoeba-test-v2021-08-07 | 0.65945 | 49.3 | 264 | 1681 | | rus-ita | tatoeba-test-v2021-08-07 | 0.64037 | 43.5 | 10045 | 71584 | | ukr-ita | tatoeba-test-v2021-08-07 | 0.69570 | 50.0 | 5000 | 27846 | | bel-ita | flores101-devtest | 0.46311 | 13.5 | 1012 | 27306 | | rus-ita | flores101-devtest | 0.53054 | 23.7 | 1012 | 27306 | | ukr-ita | flores101-devtest | 0.52783 | 23.2 | 1012 | 27306 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Wed Mar 23 23:17:47 EET 2022 * port machine: LM0-400-22516.local
ulrica/vicuna7B_es
ulrica
2023-10-10T11:38:05Z
3
0
peft
[ "peft", "region:us" ]
null
2023-10-10T08:06:00Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
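The card does not state the base model, so the following is a minimal sketch of the standard PEFT loading pattern, assuming (from the repository name alone) a Vicuna-7B base; the base model ID used below is an assumption, not something stated in the card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base model, inferred only from the repository name "vicuna7B_es".
base_id = "lmsys/vicuna-7b-v1.5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "ulrica/vicuna7B_es")

inputs = tokenizer("Hola, ¿cómo estás?", return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```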
Helsinki-NLP/opus-mt-tc-big-zle-zls
Helsinki-NLP
2023-10-10T11:34:02Z
121
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "tc", "big", "zle", "zls", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:10:45Z
--- language: - be - bg - hr - ru - sh - sl - sr_Cyrl - sr_Latn - uk - zle - zls tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zle-zls results: - task: name: Translation rus-bul type: translation args: rus-bul dataset: name: flores101-devtest type: flores_101 args: rus bul devtest metrics: - name: BLEU type: bleu value: 28.9 - task: name: Translation rus-hrv type: translation args: rus-hrv dataset: name: flores101-devtest type: flores_101 args: rus hrv devtest metrics: - name: BLEU type: bleu value: 23.2 - task: name: Translation rus-mkd type: translation args: rus-mkd dataset: name: flores101-devtest type: flores_101 args: rus mkd devtest metrics: - name: BLEU type: bleu value: 24.3 - task: name: Translation rus-slv type: translation args: rus-slv dataset: name: flores101-devtest type: flores_101 args: rus slv devtest metrics: - name: BLEU type: bleu value: 23.1 - task: name: Translation rus-srp_Cyrl type: translation args: rus-srp_Cyrl dataset: name: flores101-devtest type: flores_101 args: rus srp_Cyrl devtest metrics: - name: BLEU type: bleu value: 24.1 - task: name: Translation ukr-bul type: translation args: ukr-bul dataset: name: flores101-devtest type: flores_101 args: ukr bul devtest metrics: - name: BLEU type: bleu value: 30.8 - task: name: Translation ukr-hrv type: translation args: ukr-hrv dataset: name: flores101-devtest type: flores_101 args: ukr hrv devtest metrics: - name: BLEU type: bleu value: 24.6 - task: name: Translation ukr-mkd type: translation args: ukr-mkd dataset: name: flores101-devtest type: flores_101 args: ukr mkd devtest metrics: - name: BLEU type: bleu value: 26.2 - task: name: Translation ukr-slv type: translation args: ukr-slv dataset: name: flores101-devtest type: flores_101 args: ukr slv devtest metrics: - name: BLEU type: bleu value: 24.2 - task: name: Translation ukr-srp_Cyrl type: translation args: ukr-srp_Cyrl dataset: name: flores101-devtest type: flores_101 args: ukr srp_Cyrl devtest metrics: - name: BLEU type: bleu value: 26.2 - task: name: Translation rus-bul type: translation args: rus-bul dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-bul metrics: - name: BLEU type: bleu value: 53.7 - task: name: Translation rus-hbs type: translation args: rus-hbs dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-hbs metrics: - name: BLEU type: bleu value: 49.4 - task: name: Translation rus-slv type: translation args: rus-slv dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-slv metrics: - name: BLEU type: bleu value: 21.5 - task: name: Translation rus-srp_Cyrl type: translation args: rus-srp_Cyrl dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-srp_Cyrl metrics: - name: BLEU type: bleu value: 46.1 - task: name: Translation rus-srp_Latn type: translation args: rus-srp_Latn dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-srp_Latn metrics: - name: BLEU type: bleu value: 51.7 - task: name: Translation ukr-bul type: translation args: ukr-bul dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-bul metrics: - name: BLEU type: bleu value: 61.3 - task: name: Translation ukr-hbs type: translation args: ukr-hbs dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-hbs metrics: - name: BLEU type: bleu value: 52.1 - task: name: Translation ukr-hrv type: translation args: ukr-hrv dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-hrv metrics: - name: BLEU type: bleu value: 50.1 - task: name: 
Translation ukr-srp_Cyrl type: translation args: ukr-srp_Cyrl dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-srp_Cyrl metrics: - name: BLEU type: bleu value: 54.7 - task: name: Translation ukr-srp_Latn type: translation args: ukr-srp_Latn dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-srp_Latn metrics: - name: BLEU type: bleu value: 53.4 --- # opus-mt-tc-big-zle-zls Neural machine translation model for translating from East Slavic languages (zle) to South Slavic languages (zls). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): bel rus ukr * target language(s): bul hbs hrv slv srp_Cyrl srp_Latn * valid target language labels: >>bul<< >>hbs<< >>hrv<< >>slv<< >>srp_Cyrl<< >>srp_Latn<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zls/opusTCv20210807+bt_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT zle-zls README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zls/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bul<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>bul<< Новы каранавірус вельмі заразны.", ">>srp_Latn<< Моє ім'я — Саллі." 
] model_name = "pytorch-models/opus-mt-tc-big-zle-zls" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Короната е силно заразна. # Zovem se Sali. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zls") print(pipe(">>bul<< Новы каранавірус вельмі заразны.")) # expected output: Короната е силно заразна. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zls/opusTCv20210807+bt_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zls/opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | rus-bul | tatoeba-test-v2021-08-07 | 0.71515 | 53.7 | 1247 | 8272 | | rus-hbs | tatoeba-test-v2021-08-07 | 0.69192 | 49.4 | 2500 | 14736 | | rus-slv | tatoeba-test-v2021-08-07 | 0.38051 | 21.5 | 657 | 3969 | | rus-srp_Cyrl | tatoeba-test-v2021-08-07 | 0.66622 | 46.1 | 881 | 5407 | | rus-srp_Latn | tatoeba-test-v2021-08-07 | 0.70990 | 51.7 | 1483 | 8552 | | ukr-bul | tatoeba-test-v2021-08-07 | 0.77283 | 61.3 | 1020 | 5181 | | ukr-hbs | tatoeba-test-v2021-08-07 | 0.69401 | 52.1 | 942 | 5130 | | ukr-hrv | tatoeba-test-v2021-08-07 | 0.67202 | 50.1 | 389 | 2302 | | ukr-srp_Cyrl | tatoeba-test-v2021-08-07 | 0.70064 | 54.7 | 205 | 1112 | | ukr-srp_Latn | tatoeba-test-v2021-08-07 | 0.72405 | 53.4 | 348 | 1716 | | bel-bul | flores101-devtest | 0.49528 | 16.1 | 1012 | 24700 | | bel-hrv | flores101-devtest | 0.46308 | 12.4 | 1012 | 22423 | | bel-mkd | flores101-devtest | 0.48608 | 13.5 | 1012 | 24314 | | bel-slv | flores101-devtest | 0.44452 | 12.2 | 1012 | 23425 | | bel-srp_Cyrl | flores101-devtest | 0.44424 | 12.6 | 1012 | 23456 | | rus-bul | flores101-devtest | 0.58653 | 28.9 | 1012 | 24700 | | rus-hrv | flores101-devtest | 0.53494 | 23.2 | 1012 | 22423 | | rus-mkd | flores101-devtest | 0.55184 | 24.3 | 1012 | 24314 | | rus-slv | flores101-devtest | 0.52201 | 23.1 | 1012 | 23425 | | rus-srp_Cyrl | flores101-devtest | 0.53038 | 24.1 | 1012 | 23456 | | ukr-bul | flores101-devtest | 0.59625 | 30.8 | 1012 | 24700 | | ukr-hrv | flores101-devtest | 0.54530 | 24.6 | 1012 | 22423 | | ukr-mkd | flores101-devtest | 0.56822 | 26.2 | 1012 | 24314 | | ukr-slv | flores101-devtest | 0.53092 | 24.2 | 1012 | 23425 | | ukr-srp_Cyrl | flores101-devtest | 0.54618 | 26.2 | 1012 | 23456 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD 
project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 00:46:26 EET 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-itc-eu
Helsinki-NLP
2023-10-10T11:33:00Z
113
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "es", "eu", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T16:30:35Z
--- language: - es - eu tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-itc-eu results: - task: name: Translation spa-eus type: translation args: spa-eus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: spa-eus metrics: - name: BLEU type: bleu value: 32.4 - name: chr-F type: chrf value: 0.60699 --- # opus-mt-tc-big-itc-eu ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Italic languages (itc) to Basque (eu). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-23 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): fra ita spa - Target Language(s): eus - Language Pair(s): spa-eus - Valid Target Language Labels: - **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT itc-eus README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eus/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Il est riche.", "¿Correcto?" 
] model_name = "pytorch-models/opus-mt-tc-big-itc-eu" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Aberatsa da. # Zuzena? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-eu") print(pipe("Il est riche.")) # expected output: Aberatsa da. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | spa-eus | tatoeba-test-v2021-08-07 | 0.60699 | 32.4 | 1850 | 10945 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:08:07 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-itc-ar
Helsinki-NLP
2023-10-10T11:31:55Z
110
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "ar", "ca", "es", "fr", "gl", "it", "pt", "ro", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T11:32:09Z
--- language: - ar - ca - es - fr - gl - it - pt - ro tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-itc-ar results: - task: name: Translation cat-ara type: translation args: cat-ara dataset: name: flores101-devtest type: flores_101 args: cat ara devtest metrics: - name: BLEU type: bleu value: 18.9 - name: chr-F type: chrf value: 0.52029 - task: name: Translation fra-ara type: translation args: fra-ara dataset: name: flores101-devtest type: flores_101 args: fra ara devtest metrics: - name: BLEU type: bleu value: 19.5 - name: chr-F type: chrf value: 0.52573 - task: name: Translation glg-ara type: translation args: glg-ara dataset: name: flores101-devtest type: flores_101 args: glg ara devtest metrics: - name: BLEU type: bleu value: 19.2 - name: chr-F type: chrf value: 0.51181 - task: name: Translation ita-ara type: translation args: ita-ara dataset: name: flores101-devtest type: flores_101 args: ita ara devtest metrics: - name: BLEU type: bleu value: 15.0 - name: chr-F type: chrf value: 0.49401 - task: name: Translation por-ara type: translation args: por-ara dataset: name: flores101-devtest type: flores_101 args: por ara devtest metrics: - name: BLEU type: bleu value: 20.2 - name: chr-F type: chrf value: 0.53356 - task: name: Translation ron-ara type: translation args: ron-ara dataset: name: flores101-devtest type: flores_101 args: ron ara devtest metrics: - name: BLEU type: bleu value: 18.4 - name: chr-F type: chrf value: 0.51849 - task: name: Translation spa-ara type: translation args: spa-ara dataset: name: flores101-devtest type: flores_101 args: spa ara devtest metrics: - name: BLEU type: bleu value: 14.3 - name: chr-F type: chrf value: 0.47872 - task: name: Translation ita-ara type: translation args: ita-ara dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ita-ara metrics: - name: BLEU type: bleu value: 25.7 - name: chr-F type: chrf value: 0.53797 - task: name: Translation spa-ara type: translation args: spa-ara dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: spa-ara metrics: - name: BLEU type: bleu value: 26.6 - name: chr-F type: chrf value: 0.55520 --- # opus-mt-tc-big-itc-ar ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Italic languages (itc) to Arabic (ar). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). 
**Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-08-09 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): cat fra glg ita lat_Latn por ron spa - Target Language(s): ara - Language Pair(s): cat-ara fra-ara glg-ara ita-ara por-ara ron-ara spa-ara - Valid Target Language Labels: >>ajp<< >>apc<< >>ara<< >>arq<< >>ary<< >>arz<< - **Original Model**: [opusTCv20210807_transformer-big_2022-08-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT itc-ara README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-ara/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ara<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>ary<< Entiendo.", ">>arq<< Por favor entiende mi posición." ] model_name = "pytorch-models/opus-mt-tc-big-itc-ar" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # فهمتك # من فضلك افهم موقفي. 
``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-ar") print(pipe(">>ary<< Entiendo.")) # expected output: فهمتك ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-08-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-08-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-08-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-ara | tatoeba-test-v2021-08-07 | 0.46463 | 18.9 | 1569 | 7956 | | ita-ara | tatoeba-test-v2021-08-07 | 0.53797 | 25.7 | 235 | 1161 | | spa-ara | tatoeba-test-v2021-08-07 | 0.55520 | 26.6 | 1511 | 7547 | | cat-ara | flores101-devtest | 0.52029 | 18.9 | 1012 | 21357 | | fra-ara | flores101-devtest | 0.52573 | 19.5 | 1012 | 21357 | | glg-ara | flores101-devtest | 0.51181 | 19.2 | 1012 | 21357 | | ita-ara | flores101-devtest | 0.49401 | 15.0 | 1012 | 21357 | | por-ara | flores101-devtest | 0.53356 | 20.2 | 1012 | 21357 | | ron-ara | flores101-devtest | 0.51849 | 18.4 | 1012 | 21357 | | spa-ara | flores101-devtest | 0.47872 | 14.3 | 1012 | 21357 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:00:31 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-zls-itc
Helsinki-NLP
2023-10-10T11:27:42Z
117
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "bg", "es", "fr", "hr", "it", "mk", "pt", "ro", "sh", "sl", "sr", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T10:57:54Z
--- language: - bg - es - fr - hr - it - mk - pt - ro - sh - sl - sr language_bcp47: - sr_Cyrl - sr_Latn tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zls-itc results: - task: name: Translation bul-fra type: translation args: bul-fra dataset: name: flores101-devtest type: flores_101 args: bul fra devtest metrics: - name: BLEU type: bleu value: 34.4 - name: chr-F type: chrf value: 0.60640 - task: name: Translation bul-ita type: translation args: bul-ita dataset: name: flores101-devtest type: flores_101 args: bul ita devtest metrics: - name: BLEU type: bleu value: 24.0 - name: chr-F type: chrf value: 0.54135 - task: name: Translation bul-por type: translation args: bul-por dataset: name: flores101-devtest type: flores_101 args: bul por devtest metrics: - name: BLEU type: bleu value: 32.4 - name: chr-F type: chrf value: 0.59322 - task: name: Translation bul-ron type: translation args: bul-ron dataset: name: flores101-devtest type: flores_101 args: bul ron devtest metrics: - name: BLEU type: bleu value: 27.1 - name: chr-F type: chrf value: 0.55558 - task: name: Translation bul-spa type: translation args: bul-spa dataset: name: flores101-devtest type: flores_101 args: bul spa devtest metrics: - name: BLEU type: bleu value: 22.4 - name: chr-F type: chrf value: 0.50962 - task: name: Translation hrv-fra type: translation args: hrv-fra dataset: name: flores101-devtest type: flores_101 args: hrv fra devtest metrics: - name: BLEU type: bleu value: 33.1 - name: chr-F type: chrf value: 0.59349 - task: name: Translation hrv-ita type: translation args: hrv-ita dataset: name: flores101-devtest type: flores_101 args: hrv ita devtest metrics: - name: BLEU type: bleu value: 23.5 - name: chr-F type: chrf value: 0.52980 - task: name: Translation hrv-por type: translation args: hrv-por dataset: name: flores101-devtest type: flores_101 args: hrv por devtest metrics: - name: BLEU type: bleu value: 30.2 - name: chr-F type: chrf value: 0.57402 - task: name: Translation hrv-ron type: translation args: hrv-ron dataset: name: flores101-devtest type: flores_101 args: hrv ron devtest metrics: - name: BLEU type: bleu value: 25.9 - name: chr-F type: chrf value: 0.53650 - task: name: Translation hrv-spa type: translation args: hrv-spa dataset: name: flores101-devtest type: flores_101 args: hrv spa devtest metrics: - name: BLEU type: bleu value: 21.5 - name: chr-F type: chrf value: 0.50161 - task: name: Translation mkd-fra type: translation args: mkd-fra dataset: name: flores101-devtest type: flores_101 args: mkd fra devtest metrics: - name: BLEU type: bleu value: 35.2 - name: chr-F type: chrf value: 0.60801 - task: name: Translation mkd-ita type: translation args: mkd-ita dataset: name: flores101-devtest type: flores_101 args: mkd ita devtest metrics: - name: BLEU type: bleu value: 23.9 - name: chr-F type: chrf value: 0.53543 - task: name: Translation mkd-por type: translation args: mkd-por dataset: name: flores101-devtest type: flores_101 args: mkd por devtest metrics: - name: BLEU type: bleu value: 33.9 - name: chr-F type: chrf value: 0.59648 - task: name: Translation mkd-ron type: translation args: mkd-ron dataset: name: flores101-devtest type: flores_101 args: mkd ron devtest metrics: - name: BLEU type: bleu value: 28.0 - name: chr-F type: chrf value: 0.54998 - task: name: Translation mkd-spa type: translation args: mkd-spa dataset: name: flores101-devtest type: flores_101 args: mkd spa devtest metrics: - name: BLEU type: bleu value: 22.8 - name: chr-F type: chrf value: 0.51079 
- task: name: Translation slv-fra type: translation args: slv-fra dataset: name: flores101-devtest type: flores_101 args: slv fra devtest metrics: - name: BLEU type: bleu value: 31.5 - name: chr-F type: chrf value: 0.58233 - task: name: Translation slv-ita type: translation args: slv-ita dataset: name: flores101-devtest type: flores_101 args: slv ita devtest metrics: - name: BLEU type: bleu value: 22.4 - name: chr-F type: chrf value: 0.52390 - task: name: Translation slv-por type: translation args: slv-por dataset: name: flores101-devtest type: flores_101 args: slv por devtest metrics: - name: BLEU type: bleu value: 29.0 - name: chr-F type: chrf value: 0.56436 - task: name: Translation slv-ron type: translation args: slv-ron dataset: name: flores101-devtest type: flores_101 args: slv ron devtest metrics: - name: BLEU type: bleu value: 25.0 - name: chr-F type: chrf value: 0.53116 - task: name: Translation slv-spa type: translation args: slv-spa dataset: name: flores101-devtest type: flores_101 args: slv spa devtest metrics: - name: BLEU type: bleu value: 21.1 - name: chr-F type: chrf value: 0.49621 - task: name: Translation srp_Cyrl-fra type: translation args: srp_Cyrl-fra dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl fra devtest metrics: - name: BLEU type: bleu value: 36.0 - name: chr-F type: chrf value: 0.62110 - task: name: Translation srp_Cyrl-ita type: translation args: srp_Cyrl-ita dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl ita devtest metrics: - name: BLEU type: bleu value: 23.9 - name: chr-F type: chrf value: 0.54083 - task: name: Translation srp_Cyrl-por type: translation args: srp_Cyrl-por dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl por devtest metrics: - name: BLEU type: bleu value: 34.9 - name: chr-F type: chrf value: 0.61248 - task: name: Translation srp_Cyrl-ron type: translation args: srp_Cyrl-ron dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl ron devtest metrics: - name: BLEU type: bleu value: 28.8 - name: chr-F type: chrf value: 0.56235 - task: name: Translation srp_Cyrl-spa type: translation args: srp_Cyrl-spa dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl spa devtest metrics: - name: BLEU type: bleu value: 22.8 - name: chr-F type: chrf value: 0.51698 - task: name: Translation bul-fra type: translation args: bul-fra dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-fra metrics: - name: BLEU type: bleu value: 52.9 - name: chr-F type: chrf value: 0.68971 - task: name: Translation bul-ita type: translation args: bul-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-ita metrics: - name: BLEU type: bleu value: 45.1 - name: chr-F type: chrf value: 0.66412 - task: name: Translation bul-spa type: translation args: bul-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-spa metrics: - name: BLEU type: bleu value: 49.7 - name: chr-F type: chrf value: 0.66672 - task: name: Translation hbs-fra type: translation args: hbs-fra dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-fra metrics: - name: BLEU type: bleu value: 48.1 - name: chr-F type: chrf value: 0.66434 - task: name: Translation hbs-ita type: translation args: hbs-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-ita metrics: - name: BLEU type: bleu value: 53.5 - name: chr-F type: chrf value: 0.72381 - task: name: Translation hbs-spa type: translation args: hbs-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-spa 
metrics: - name: BLEU type: bleu value: 58.0 - name: chr-F type: chrf value: 0.73105 - task: name: Translation hrv-fra type: translation args: hrv-fra dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hrv-fra metrics: - name: BLEU type: bleu value: 44.3 - name: chr-F type: chrf value: 0.62800 - task: name: Translation hrv-spa type: translation args: hrv-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hrv-spa metrics: - name: BLEU type: bleu value: 57.5 - name: chr-F type: chrf value: 0.71370 - task: name: Translation mkd-spa type: translation args: mkd-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: mkd-spa metrics: - name: BLEU type: bleu value: 62.1 - name: chr-F type: chrf value: 0.75366 - task: name: Translation srp_Latn-ita type: translation args: srp_Latn-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Latn-ita metrics: - name: BLEU type: bleu value: 59.6 - name: chr-F type: chrf value: 0.76045 --- # opus-mt-tc-big-zls-itc ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from South Slavic languages (zls) to Italic languages (itc). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). 
**Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-08-10 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): bos_Latn bul hbs hrv mkd slv srp_Cyrl srp_Latn - Target Language(s): fra ita por ron spa - Language Pair(s): bul-fra bul-ita bul-por bul-ron bul-spa hbs-fra hbs-ita hbs-spa hrv-fra hrv-ita hrv-por hrv-ron hrv-spa mkd-fra mkd-ita mkd-por mkd-ron mkd-spa slv-fra slv-ita slv-por slv-ron slv-spa srp_Cyrl-fra srp_Cyrl-ita srp_Cyrl-por srp_Cyrl-ron srp_Cyrl-spa srp_Latn-ita - Valid Target Language Labels: >>acf<< >>aoa<< >>arg<< >>ast<< >>cat<< >>cbk<< >>ccd<< >>cks<< >>cos<< >>cri<< >>crs<< >>dlm<< >>drc<< >>egl<< >>ext<< >>fab<< >>fax<< >>fra<< >>frc<< >>frm<< >>fro<< >>frp<< >>fur<< >>gcf<< >>gcr<< >>glg<< >>hat<< >>idb<< >>ist<< >>ita<< >>itk<< >>kea<< >>kmv<< >>lad<< >>lad_Latn<< >>lat<< >>lat_Latn<< >>lij<< >>lld<< >>lmo<< >>lou<< >>mcm<< >>mfe<< >>mol<< >>mwl<< >>mxi<< >>mzs<< >>nap<< >>nrf<< >>oci<< >>osc<< >>osp<< >>pap<< >>pcd<< >>pln<< >>pms<< >>pob<< >>por<< >>pov<< >>pre<< >>pro<< >>qbb<< >>qhr<< >>rcf<< >>rgn<< >>roh<< >>ron<< >>ruo<< >>rup<< >>ruq<< >>scf<< >>scn<< >>sdc<< >>sdn<< >>spa<< >>spq<< >>spx<< >>src<< >>srd<< >>sro<< >>tmg<< >>tvy<< >>vec<< >>vkp<< >>wln<< >>xfa<< >>xum<< - **Original Model**: [opusTCv20210807_transformer-big_2022-08-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-itc/opusTCv20210807_transformer-big_2022-08-10.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT zls-itc README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-itc/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fra<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>fra<< Dobar dan, kako si?", ">>spa<< Znam da je ovo čudno." ] model_name = "pytorch-models/opus-mt-tc-big-zls-itc" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Bonjour, comment allez-vous ? # Sé que esto es raro. 
``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zls-itc") print(pipe(">>fra<< Dobar dan, kako si?")) # expected output: Bonjour, comment allez-vous ? ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-08-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-itc/opusTCv20210807_transformer-big_2022-08-10.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-08-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-itc/opusTCv20210807_transformer-big_2022-08-10.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-08-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-itc/opusTCv20210807_transformer-big_2022-08-10.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bul-fra | tatoeba-test-v2021-08-07 | 0.68971 | 52.9 | 446 | 3669 | | bul-ita | tatoeba-test-v2021-08-07 | 0.66412 | 45.1 | 2500 | 16951 | | bul-spa | tatoeba-test-v2021-08-07 | 0.66672 | 49.7 | 286 | 1783 | | hbs-fra | tatoeba-test-v2021-08-07 | 0.66434 | 48.1 | 474 | 3370 | | hbs-ita | tatoeba-test-v2021-08-07 | 0.72381 | 53.5 | 534 | 3208 | | hbs-spa | tatoeba-test-v2021-08-07 | 0.73105 | 58.0 | 607 | 3766 | | hrv-fra | tatoeba-test-v2021-08-07 | 0.62800 | 44.3 | 258 | 1943 | | hrv-spa | tatoeba-test-v2021-08-07 | 0.71370 | 57.5 | 254 | 1702 | | mkd-spa | tatoeba-test-v2021-08-07 | 0.75366 | 62.1 | 217 | 1121 | | srp_Latn-ita | tatoeba-test-v2021-08-07 | 0.76045 | 59.6 | 212 | 1292 | | bul-fra | flores101-devtest | 0.60640 | 34.4 | 1012 | 28343 | | bul-ita | flores101-devtest | 0.54135 | 24.0 | 1012 | 27306 | | bul-por | flores101-devtest | 0.59322 | 32.4 | 1012 | 26519 | | bul-ron | flores101-devtest | 0.55558 | 27.1 | 1012 | 26799 | | bul-spa | flores101-devtest | 0.50962 | 22.4 | 1012 | 29199 | | hrv-fra | flores101-devtest | 0.59349 | 33.1 | 1012 | 28343 | | hrv-ita | flores101-devtest | 0.52980 | 23.5 | 1012 | 27306 | | hrv-por | flores101-devtest | 0.57402 | 30.2 | 1012 | 26519 | | hrv-ron | flores101-devtest | 0.53650 | 25.9 | 1012 | 26799 | | hrv-spa | flores101-devtest | 0.50161 | 21.5 | 1012 | 29199 | | mkd-fra | flores101-devtest | 0.60801 | 35.2 | 1012 | 28343 | | mkd-ita | flores101-devtest | 0.53543 | 23.9 | 1012 | 27306 | | mkd-por | flores101-devtest | 0.59648 | 33.9 | 1012 | 26519 | | mkd-ron | flores101-devtest | 0.54998 | 28.0 | 1012 | 26799 | | mkd-spa | flores101-devtest | 0.51079 | 22.8 | 1012 | 29199 | | slv-fra | flores101-devtest | 0.58233 | 31.5 | 1012 | 28343 | | slv-ita | flores101-devtest | 0.52390 | 22.4 | 1012 | 27306 | | slv-por | flores101-devtest | 0.56436 | 29.0 | 1012 | 26519 | | slv-ron | flores101-devtest | 0.53116 | 25.0 | 1012 | 26799 | | slv-spa | flores101-devtest | 0.49621 | 21.1 | 1012 | 29199 | | srp_Cyrl-fra | flores101-devtest | 0.62110 | 36.0 | 1012 | 28343 | | srp_Cyrl-ita | flores101-devtest | 0.54083 | 23.9 | 1012 | 27306 | | srp_Cyrl-por | flores101-devtest | 0.61248 | 
34.9 | 1012 | 26519 | | srp_Cyrl-ron | flores101-devtest | 0.56235 | 28.8 | 1012 | 26799 | | srp_Cyrl-spa | flores101-devtest | 0.51698 | 22.8 | 1012 | 29199 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Fri Aug 12 23:59:29 EEST 2022 * port machine: LM0-400-22516.local
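The chr-F and BLEU figures in the evaluation table above were computed on the released test sets linked in the Evaluation section. For readers who want to reproduce such scores on their own system output, a minimal sketch using the sacrebleu package follows; sacrebleu and the two file names are assumptions for illustration, not artifacts shipped with this model.

```python
# Minimal sketch: score translations against references with sacrebleu.
# Assumes `pip install sacrebleu`; "hypotheses.txt" and "references.txt" are
# placeholder files (one sentence per line), not files distributed with the model.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU : {bleu.score:.1f}")
print(f"chr-F: {chrf.score:.5f}")  # note: the tables in this card report chr-F on a 0-1 scale
```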
Helsinki-NLP/opus-mt-tc-big-gmq-ar
Helsinki-NLP
2023-10-10T11:26:33Z
117
1
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "ar", "da", "sv", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T14:35:33Z
--- language: - ar - da - sv tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-gmq-ar results: - task: name: Translation dan-ara type: translation args: dan-ara dataset: name: flores101-devtest type: flores_101 args: dan ara devtest metrics: - name: BLEU type: bleu value: 19.9 - name: chr-F type: chrf value: 0.52841 - task: name: Translation nob-ara type: translation args: nob-ara dataset: name: flores101-devtest type: flores_101 args: nob ara devtest metrics: - name: BLEU type: bleu value: 16.8 - name: chr-F type: chrf value: 0.49670 - task: name: Translation swe-ara type: translation args: swe-ara dataset: name: flores101-devtest type: flores_101 args: swe ara devtest metrics: - name: BLEU type: bleu value: 19.3 - name: chr-F type: chrf value: 0.51882 --- # opus-mt-tc-big-gmq-ar ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from North Germanic languages (gmq) to Arabic (ar). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-27 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): dan swe - Target Language(s): ara - Language Pair(s): dan-ara swe-ara - Valid Target Language Labels: >>apc<< >>ara<< >>arq<< >>arz<< - **Original Model**: [opusTCv20210807_transformer-big_2022-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-ara/opusTCv20210807_transformer-big_2022-07-27.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT gmq-ara README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-ara/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>><<` ## Uses This model can be used for translation and text-to-text generation. 
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>ara<< Jeg elsker semitiske sprog.", ">>ara<< Vad handlar boken om?" ] model_name = "pytorch-models/opus-mt-tc-big-gmq-ar" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # أحبّ اللغات الساميّة. # عن ماذا يتحدث الكتاب؟ ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-gmq-ar") print(pipe(">>ara<< Jeg elsker semitiske sprog.")) # expected output: أحبّ اللغات الساميّة. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-ara/opusTCv20210807_transformer-big_2022-07-27.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-ara/opusTCv20210807_transformer-big_2022-07-27.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-ara/opusTCv20210807_transformer-big_2022-07-27.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | dan-ara | flores101-devtest | 0.52841 | 19.9 | 1012 | 21357 | | nob-ara | flores101-devtest | 0.49670 | 16.8 | 1012 | 21357 | | swe-ara | flores101-devtest | 0.51882 | 19.3 | 1012 | 21357 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:05:06 EEST 2022 * port machine: LM0-400-22516.local
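Because the model expects a sentence-initial `>>id<<` label, it can be useful to check which target-language tokens the tokenizer actually contains before translating. The following is a small inspection sketch, not part of the original card:

```python
# Sketch: list the ">>xxx<<" target-language labels known to the tokenizer.
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-gmq-ar")

# Language labels are stored as ordinary vocabulary entries of the form ">>id<<".
lang_tokens = sorted(
    tok for tok in tokenizer.get_vocab() if tok.startswith(">>") and tok.endswith("<<")
)
print(lang_tokens)  # expected to include the labels listed above: >>apc<< >>ara<< >>arq<< >>arz<<
```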
Helsinki-NLP/opus-mt-tc-big-zls-zle
Helsinki-NLP
2023-10-10T11:25:27Z
127
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "tc", "big", "zls", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:47:28Z
--- language: - be - bg - hr - ru - sh - sl - sr_Cyrl - sr_Latn - uk - zle - zls tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zls-zle results: - task: name: Translation bul-rus type: translation args: bul-rus dataset: name: flores101-devtest type: flores_101 args: bul rus devtest metrics: - name: BLEU type: bleu value: 24.6 - task: name: Translation bul-ukr type: translation args: bul-ukr dataset: name: flores101-devtest type: flores_101 args: bul ukr devtest metrics: - name: BLEU type: bleu value: 22.9 - task: name: Translation hrv-rus type: translation args: hrv-rus dataset: name: flores101-devtest type: flores_101 args: hrv rus devtest metrics: - name: BLEU type: bleu value: 23.5 - task: name: Translation hrv-ukr type: translation args: hrv-ukr dataset: name: flores101-devtest type: flores_101 args: hrv ukr devtest metrics: - name: BLEU type: bleu value: 21.9 - task: name: Translation mkd-rus type: translation args: mkd-rus dataset: name: flores101-devtest type: flores_101 args: mkd rus devtest metrics: - name: BLEU type: bleu value: 24.3 - task: name: Translation mkd-ukr type: translation args: mkd-ukr dataset: name: flores101-devtest type: flores_101 args: mkd ukr devtest metrics: - name: BLEU type: bleu value: 22.5 - task: name: Translation slv-rus type: translation args: slv-rus dataset: name: flores101-devtest type: flores_101 args: slv rus devtest metrics: - name: BLEU type: bleu value: 22.0 - task: name: Translation slv-ukr type: translation args: slv-ukr dataset: name: flores101-devtest type: flores_101 args: slv ukr devtest metrics: - name: BLEU type: bleu value: 20.2 - task: name: Translation srp_Cyrl-rus type: translation args: srp_Cyrl-rus dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl rus devtest metrics: - name: BLEU type: bleu value: 25.7 - task: name: Translation srp_Cyrl-ukr type: translation args: srp_Cyrl-ukr dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl ukr devtest metrics: - name: BLEU type: bleu value: 24.4 - task: name: Translation bul-rus type: translation args: bul-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-rus metrics: - name: BLEU type: bleu value: 52.6 - task: name: Translation bul-ukr type: translation args: bul-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-ukr metrics: - name: BLEU type: bleu value: 53.3 - task: name: Translation hbs-rus type: translation args: hbs-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-rus metrics: - name: BLEU type: bleu value: 58.5 - task: name: Translation hbs-ukr type: translation args: hbs-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-ukr metrics: - name: BLEU type: bleu value: 52.3 - task: name: Translation hrv-ukr type: translation args: hrv-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hrv-ukr metrics: - name: BLEU type: bleu value: 50.0 - task: name: Translation slv-rus type: translation args: slv-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: slv-rus metrics: - name: BLEU type: bleu value: 27.3 - task: name: Translation srp_Cyrl-rus type: translation args: srp_Cyrl-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Cyrl-rus metrics: - name: BLEU type: bleu value: 56.2 - task: name: Translation srp_Cyrl-ukr type: translation args: srp_Cyrl-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Cyrl-ukr metrics: - name: BLEU type: bleu value: 51.8 - task: name: 
Translation srp_Latn-rus type: translation args: srp_Latn-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Latn-rus metrics: - name: BLEU type: bleu value: 60.1 - task: name: Translation srp_Latn-ukr type: translation args: srp_Latn-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Latn-ukr metrics: - name: BLEU type: bleu value: 55.8 --- # opus-mt-tc-big-zls-zle Neural machine translation model for translating from South Slavic languages (zls) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): bul hbs hrv slv srp_Cyrl srp_Latn * target language(s): bel rus ukr * valid target language labels: >>bel<< >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zle/opusTCv20210807+bt_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT zls-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>rus<< Gdje je brigadir?", ">>ukr<< Zovem se Seli." 
] model_name = "pytorch-models/opus-mt-tc-big-zls-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Где бригадир? # Мене звати Саллі. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zls-zle") print(pipe(">>rus<< Gdje je brigadir?")) # expected output: Где бригадир? ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zle/opusTCv20210807+bt_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zle/opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bul-rus | tatoeba-test-v2021-08-07 | 0.71467 | 52.6 | 1247 | 7870 | | bul-ukr | tatoeba-test-v2021-08-07 | 0.71757 | 53.3 | 1020 | 4932 | | hbs-rus | tatoeba-test-v2021-08-07 | 0.74593 | 58.5 | 2500 | 14213 | | hbs-ukr | tatoeba-test-v2021-08-07 | 0.70244 | 52.3 | 942 | 4961 | | hrv-ukr | tatoeba-test-v2021-08-07 | 0.68931 | 50.0 | 389 | 2232 | | slv-rus | tatoeba-test-v2021-08-07 | 0.42255 | 27.3 | 657 | 4056 | | srp_Cyrl-rus | tatoeba-test-v2021-08-07 | 0.74112 | 56.2 | 881 | 5117 | | srp_Cyrl-ukr | tatoeba-test-v2021-08-07 | 0.68915 | 51.8 | 205 | 1061 | | srp_Latn-rus | tatoeba-test-v2021-08-07 | 0.75340 | 60.1 | 1483 | 8311 | | srp_Latn-ukr | tatoeba-test-v2021-08-07 | 0.73106 | 55.8 | 348 | 1668 | | bul-rus | flores101-devtest | 0.54226 | 24.6 | 1012 | 23295 | | bul-ukr | flores101-devtest | 0.53382 | 22.9 | 1012 | 22810 | | hrv-rus | flores101-devtest | 0.51726 | 23.5 | 1012 | 23295 | | hrv-ukr | flores101-devtest | 0.51011 | 21.9 | 1012 | 22810 | | mkd-bel | flores101-devtest | 0.40885 | 10.7 | 1012 | 24829 | | mkd-rus | flores101-devtest | 0.52509 | 24.3 | 1012 | 23295 | | mkd-ukr | flores101-devtest | 0.52021 | 22.5 | 1012 | 22810 | | slv-rus | flores101-devtest | 0.50349 | 22.0 | 1012 | 23295 | | slv-ukr | flores101-devtest | 0.49156 | 20.2 | 1012 | 22810 | | srp_Cyrl-rus | flores101-devtest | 0.53656 | 25.7 | 1012 | 23295 | | srp_Cyrl-ukr | flores101-devtest | 0.53623 | 24.4 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. 
## Model conversion info

* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 04:08:51 EET 2022
* port machine: LM0-400-22516.local
sanskarGupta551/bloomz-1b7_Prompt_to_Dialog
sanskarGupta551
2023-10-10T11:25:06Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-10T11:25:04Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.5.0
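The card above only records the PEFT version, so the following is a hedged sketch of how such an adapter is typically loaded; the base checkpoint `bigscience/bloomz-1b7` and the example prompt are assumptions inferred from the repository name, not details stated in the card.

```python
# Sketch (not from the original card): attach a PEFT adapter to its base model.
# ASSUMPTION: the base model is bigscience/bloomz-1b7, inferred only from the repo name.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "bigscience/bloomz-1b7"  # assumed base checkpoint
adapter_name = "sanskarGupta551/bloomz-1b7_Prompt_to_Dialog"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Load the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_name)

inputs = tokenizer("Write a short dialog between two friends about the weather.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```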
Helsinki-NLP/opus-mt-tc-big-he-gmq
Helsinki-NLP
2023-10-10T11:21:10Z
107
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "da", "he", "nb", "sv", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T16:14:29Z
--- language: - da - he - nb - sv tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-he-gmq results: - task: name: Translation heb-dan type: translation args: heb-dan dataset: name: flores101-devtest type: flores_101 args: heb dan devtest metrics: - name: BLEU type: bleu value: 31.4 - name: chr-F type: chrf value: 0.58023 - task: name: Translation heb-isl type: translation args: heb-isl dataset: name: flores101-devtest type: flores_101 args: heb isl devtest metrics: - name: BLEU type: bleu value: 14.0 - name: chr-F type: chrf value: 0.41998 - task: name: Translation heb-nob type: translation args: heb-nob dataset: name: flores101-devtest type: flores_101 args: heb nob devtest metrics: - name: BLEU type: bleu value: 23.7 - name: chr-F type: chrf value: 0.53086 - task: name: Translation heb-swe type: translation args: heb-swe dataset: name: flores101-devtest type: flores_101 args: heb swe devtest metrics: - name: BLEU type: bleu value: 29.6 - name: chr-F type: chrf value: 0.56881 --- # opus-mt-tc-big-he-gmq ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Hebrew (he) to North Germanic languages (gmq). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-23 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): heb - Target Language(s): dan nob nor swe - Language Pair(s): heb-dan heb-nob heb-swe - Valid Target Language Labels: >>dan<< >>fao<< >>isl<< >>jut<< >>nno<< >>nob<< >>non<< >>nrn<< >>ovd<< >>qer<< >>rmg<< >>swe<< - **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-gmq/opusTCv20210807_transformer-big_2022-07-23.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT heb-gmq README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-gmq/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. 
`>>dan<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>dan<< כל שלושת הילדים של אליעזר לודוויג זמנהוף נרצחו בשואה.", ">>swe<< הסתבר שטום היה מרגל." ] model_name = "pytorch-models/opus-mt-tc-big-he-gmq" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Alle tre børn af Eliezer Ludwig Zamenhof blev dræbt i Holocaust. # Det visade sig att Tom var en spion. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-he-gmq") print(pipe(">>dan<< כל שלושת הילדים של אליעזר לודוויג זמנהוף נרצחו בשואה.")) # expected output: Alle tre børn af Eliezer Ludwig Zamenhof blev dræbt i Holocaust. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-gmq/opusTCv20210807_transformer-big_2022-07-23.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-gmq/opusTCv20210807_transformer-big_2022-07-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-gmq/opusTCv20210807_transformer-big_2022-07-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | heb-dan | flores101-devtest | 0.58023 | 31.4 | 1012 | 24638 | | heb-isl | flores101-devtest | 0.41998 | 14.0 | 1012 | 22834 | | heb-nob | flores101-devtest | 0.53086 | 23.7 | 1012 | 23873 | | heb-swe | flores101-devtest | 0.56881 | 29.6 | 1012 | 23121 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:07:45 EEST 2022 * port machine: LM0-400-22516.local
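In addition to the PyTorch snippets above, the repository is tagged with TensorFlow ("tf") weights, so the same checkpoint should also be loadable through the TensorFlow classes of transformers. A hedged sketch, assuming the TF weights are indeed present:

```python
# Sketch: use the checkpoint from TensorFlow instead of PyTorch.
# Assumes TensorFlow weights are available in the repository (suggested by the "tf" tag).
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-tc-big-he-gmq"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>swe<< הסתבר שטום היה מרגל."], return_tensors="tf", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
# expected output (as in the PyTorch example above): Det visade sig att Tom var en spion.
```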
Helsinki-NLP/opus-mt-tc-big-cel-en
Helsinki-NLP
2023-10-10T11:19:09Z
114
1
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "br", "cel", "cy", "en", "ga", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T15:36:34Z
--- language: - br - cel - cy - en - ga tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-cel-en results: - task: name: Translation cym-eng type: translation args: cym-eng dataset: name: flores101-devtest type: flores_101 args: cym eng devtest metrics: - name: BLEU type: bleu value: 50.2 - task: name: Translation gle-eng type: translation args: gle-eng dataset: name: flores101-devtest type: flores_101 args: gle eng devtest metrics: - name: BLEU type: bleu value: 37.4 - task: name: Translation bre-eng type: translation args: bre-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bre-eng metrics: - name: BLEU type: bleu value: 36.1 - task: name: Translation cym-eng type: translation args: cym-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: cym-eng metrics: - name: BLEU type: bleu value: 53.6 - task: name: Translation gle-eng type: translation args: gle-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: gle-eng metrics: - name: BLEU type: bleu value: 57.7 --- # opus-mt-tc-big-cel-en Neural machine translation model for translating from Celtic languages (cel) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-13 * source language(s): bre cym gle * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT cel-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "A-du emaoc’h?", "Ta'n ushtey glen." ] model_name = "pytorch-models/opus-mt-tc-big-cel-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Is that you? # Ta'n ushtey glen. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-cel-en") print(pipe("A-du emaoc’h?")) # expected output: Is that you? 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bre-eng | tatoeba-test-v2021-08-07 | 0.53712 | 36.1 | 383 | 2065 | | cym-eng | tatoeba-test-v2021-08-07 | 0.69239 | 53.6 | 818 | 5563 | | gle-eng | tatoeba-test-v2021-08-07 | 0.72087 | 57.7 | 1913 | 11190 | | cym-eng | flores101-devtest | 0.71379 | 50.2 | 1012 | 24721 | | gle-eng | flores101-devtest | 0.63946 | 37.4 | 1012 | 24721 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 18:36:25 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-zle-zle
Helsinki-NLP
2023-10-10T11:17:01Z
114
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:08:16Z
--- language: - be - ru - uk - zle tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zle-zle results: - task: name: Translation rus-ukr type: translation args: rus-ukr dataset: name: flores101-devtest type: flores_101 args: rus ukr devtest metrics: - name: BLEU type: bleu value: 25.5 - task: name: Translation ukr-rus type: translation args: ukr-rus dataset: name: flores101-devtest type: flores_101 args: ukr rus devtest metrics: - name: BLEU type: bleu value: 28.3 - task: name: Translation bel-rus type: translation args: bel-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bel-rus metrics: - name: BLEU type: bleu value: 68.6 - task: name: Translation bel-ukr type: translation args: bel-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bel-ukr metrics: - name: BLEU type: bleu value: 65.5 - task: name: Translation rus-bel type: translation args: rus-bel dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-bel metrics: - name: BLEU type: bleu value: 50.3 - task: name: Translation rus-ukr type: translation args: rus-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-ukr metrics: - name: BLEU type: bleu value: 70.1 - task: name: Translation ukr-bel type: translation args: ukr-bel dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-bel metrics: - name: BLEU type: bleu value: 58.9 - task: name: Translation ukr-rus type: translation args: ukr-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-rus metrics: - name: BLEU type: bleu value: 75.7 --- # opus-mt-tc-big-zle-zle Neural machine translation model for translating from East Slavic languages (zle) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): bel rus ukr * target language(s): bel rus ukr * valid target language labels: >>bel<< >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opusTCv20210807+bt_transformer-big_2022-03-07.zip) * more information released models: [OPUS-MT zle-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>ukr<< Кот мёртвый.", ">>bel<< Джон живе в Нью-Йорку." ] model_name = "pytorch-models/opus-mt-tc-big-zle-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Кіт мертвий. # Джон жыве ў Нью-Йорку. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zle") print(pipe(">>ukr<< Кот мёртвый.")) # expected output: Кіт мертвий. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opusTCv20210807+bt_transformer-big_2022-03-07.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bel-rus | tatoeba-test-v2021-08-07 | 0.82526 | 68.6 | 2500 | 18895 | | bel-ukr | tatoeba-test-v2021-08-07 | 0.81036 | 65.5 | 2355 | 15179 | | rus-bel | tatoeba-test-v2021-08-07 | 0.66943 | 50.3 | 2500 | 18756 | | rus-ukr | tatoeba-test-v2021-08-07 | 0.83639 | 70.1 | 10000 | 60212 | | ukr-bel | tatoeba-test-v2021-08-07 | 0.75368 | 58.9 | 2355 | 15175 | | ukr-rus | tatoeba-test-v2021-08-07 | 0.86806 | 75.7 | 10000 | 60387 | | bel-rus | flores101-devtest | 0.47960 | 14.5 | 1012 | 23295 | | bel-ukr | flores101-devtest | 0.47335 | 12.8 | 1012 | 22810 | | rus-ukr | flores101-devtest | 0.55287 | 25.5 | 1012 | 22810 | | ukr-rus | flores101-devtest | 0.56224 | 28.3 | 1012 | 23295 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 00:15:39 EET 2022 * port machine: LM0-400-22516.local
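The `generate` calls in the snippets above rely on the default decoding settings. For completeness, here is a sketch that sets beam search and output length explicitly; these are standard transformers generation arguments, not values prescribed by this card.

```python
# Sketch: translate with explicit beam-search settings instead of the defaults.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-zle-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>ukr<< Кот мёртвый."], return_tensors="pt", padding=True)
translated = model.generate(
    **batch,
    num_beams=4,       # beam search width
    max_length=128,    # hard cap on output length, counted in subword tokens
    early_stopping=True,
)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
# expected output (as above): Кіт мертвий.
```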
Helsinki-NLP/opus-mt-tc-big-zle-zlw
Helsinki-NLP
2023-10-10T11:13:51Z
119
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "cs", "pl", "ru", "uk", "zle", "zlw", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:13:49Z
--- language: - be - cs - pl - ru - uk - zle - zlw tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zle-zlw results: - task: name: Translation rus-ces type: translation args: rus-ces dataset: name: flores101-devtest type: flores_101 args: rus ces devtest metrics: - name: BLEU type: bleu value: 23.1 - task: name: Translation ukr-ces type: translation args: ukr-ces dataset: name: flores101-devtest type: flores_101 args: ukr ces devtest metrics: - name: BLEU type: bleu value: 25.1 - task: name: Translation bel-pol type: translation args: bel-pol dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bel-pol metrics: - name: BLEU type: bleu value: 47.1 - task: name: Translation rus-ces type: translation args: rus-ces dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-ces metrics: - name: BLEU type: bleu value: 53.4 - task: name: Translation rus-pol type: translation args: rus-pol dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-pol metrics: - name: BLEU type: bleu value: 53.7 - task: name: Translation ukr-ces type: translation args: ukr-ces dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-ces metrics: - name: BLEU type: bleu value: 58.0 - task: name: Translation ukr-pol type: translation args: ukr-pol dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-pol metrics: - name: BLEU type: bleu value: 57.0 - task: name: Translation rus-ces type: translation args: rus-ces dataset: name: newstest2013 type: wmt-2013-news args: rus-ces metrics: - name: BLEU type: bleu value: 26.0 --- # opus-mt-tc-big-zle-zlw Neural machine translation model for translating from East Slavic languages (zle) to West Slavic languages (zlw). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): bel rus ukr * target language(s): ces pol * valid target language labels: >>ces<< >>pol<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zlw/opusTCv20210807+bt_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT zle-zlw README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zlw/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ces<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>pol<< Это метафора.", ">>pol<< Что вы делали?" ] model_name = "pytorch-models/opus-mt-tc-big-zle-zlw" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # To metafora. # Co robiliście? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zlw") print(pipe(">>pol<< Это метафора.")) # expected output: To metafora. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zlw/opusTCv20210807+bt_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zlw/opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bel-pol | tatoeba-test-v2021-08-07 | 0.65517 | 47.1 | 287 | 1706 | | rus-ces | tatoeba-test-v2021-08-07 | 0.69695 | 53.4 | 2934 | 16831 | | rus-pol | tatoeba-test-v2021-08-07 | 0.72176 | 53.7 | 3543 | 21505 | | ukr-ces | tatoeba-test-v2021-08-07 | 0.73149 | 58.0 | 1787 | 8550 | | ukr-pol | tatoeba-test-v2021-08-07 | 0.74649 | 57.0 | 2519 | 13201 | | bel-ces | flores101-devtest | 0.41248 | 11.1 | 1012 | 22101 | | bel-pol | flores101-devtest | 0.42240 | 10.2 | 1012 | 22520 | | rus-ces | flores101-devtest | 0.50971 | 23.1 | 1012 | 22101 | | rus-pol | flores101-devtest | 0.48672 | 18.4 | 1012 | 22520 | | ukr-ces | flores101-devtest | 0.52482 | 25.1 | 1012 | 22101 | | ukr-pol | flores101-devtest | 0.48790 | 18.8 | 1012 | 22520 | | rus-ces | newstest2012 | 0.45834 | 18.8 | 3003 | 65456 | | rus-ces | newstest2013 | 0.52364 | 26.0 | 3000 | 57250 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 00:50:29 EET 2022 * port machine: LM0-400-22516.local
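For larger workloads, the translation pipeline shown above can also be placed on a GPU and fed several sentences per batch. A short sketch, assuming a CUDA device is available (not a requirement stated in the card):

```python
# Sketch: run the translation pipeline on a GPU and translate a small batch of sentences.
# Assumes a CUDA device is available; drop `device=0` to stay on CPU.
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zlw", device=0)

sentences = [
    ">>pol<< Это метафора.",
    ">>ces<< Что вы делали?",
]
for result in pipe(sentences, batch_size=2):
    print(result["translation_text"])
```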
Helsinki-NLP/opus-mt-tc-big-fr-zle
Helsinki-NLP
2023-10-10T11:11:18Z
121
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "fr", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:29:13Z
--- language: - be - fr - ru - uk - zle tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-fr-zle results: - task: name: Translation fra-rus type: translation args: fra-rus dataset: name: flores101-devtest type: flores_101 args: fra rus devtest metrics: - name: BLEU type: bleu value: 25.8 - task: name: Translation fra-ukr type: translation args: fra-ukr dataset: name: flores101-devtest type: flores_101 args: fra ukr devtest metrics: - name: BLEU type: bleu value: 23.1 - task: name: Translation fra-bel type: translation args: fra-bel dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fra-bel metrics: - name: BLEU type: bleu value: 31.1 - task: name: Translation fra-rus type: translation args: fra-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fra-rus metrics: - name: BLEU type: bleu value: 46.1 - task: name: Translation fra-ukr type: translation args: fra-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fra-ukr metrics: - name: BLEU type: bleu value: 39.9 - task: name: Translation fra-rus type: translation args: fra-rus dataset: name: newstest2012 type: wmt-2012-news args: fra-rus metrics: - name: BLEU type: bleu value: 23.1 - task: name: Translation fra-rus type: translation args: fra-rus dataset: name: newstest2013 type: wmt-2013-news args: fra-rus metrics: - name: BLEU type: bleu value: 24.8 --- # opus-mt-tc-big-fr-zle Neural machine translation model for translating from French (fr) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): fra * target language(s): bel rus ukr * valid target language labels: >>bel<< >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT fra-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>rus<< Ils ont acheté un très bon appareil photo.", ">>ukr<< Il s'est soudain mis à pleuvoir." ] model_name = "pytorch-models/opus-mt-tc-big-fr-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Они купили очень хорошую камеру. # Раптом почався дощ. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-zle") print(pipe(">>rus<< Ils ont acheté un très bon appareil photo.")) # expected output: Они купили очень хорошую камеру. 
``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-bel | tatoeba-test-v2021-08-07 | 0.52711 | 31.1 | 283 | 1703 | | fra-rus | tatoeba-test-v2021-08-07 | 0.66502 | 46.1 | 11490 | 70123 | | fra-ukr | tatoeba-test-v2021-08-07 | 0.61860 | 39.9 | 10035 | 54372 | | fra-rus | flores101-devtest | 0.54106 | 25.8 | 1012 | 23295 | | fra-ukr | flores101-devtest | 0.52733 | 23.1 | 1012 | 22810 | | fra-rus | newstest2012 | 0.51254 | 23.1 | 3003 | 64790 | | fra-rus | newstest2013 | 0.52342 | 24.8 | 3000 | 58560 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 02:05:04 EET 2022 * port machine: LM0-400-22516.local
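As a supplement to the benchmark table above: the BLEU and chr-F values are taken from the released evaluation files, but comparable corpus-level scores can be recomputed from your own system outputs. The sketch below uses the `sacrebleu` Python package with made-up example strings; it is an illustrative assumption, not the official OPUS-MT-train evaluation pipeline.

```python
# Minimal sketch: corpus-level BLEU and chrF with sacrebleu (illustrative strings, not real test data)
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["Они купили очень хорошую камеру."]          # system outputs, one per segment
references = [["Они купили очень хорошую фотокамеру."]]    # one reference stream, aligned with the hypotheses

print(BLEU().corpus_score(hypotheses, references))
print(CHRF().corpus_score(hypotheses, references))
```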
Helsinki-NLP/opus-mt-tc-big-pt-zle
Helsinki-NLP
2023-10-10T11:10:15Z
109
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "pt", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:39:11Z
--- language: - pt - ru - uk - zle tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-pt-zle results: - task: name: Translation por-rus type: translation args: por-rus dataset: name: flores101-devtest type: flores_101 args: por rus devtest metrics: - name: BLEU type: bleu value: 26.8 - task: name: Translation por-ukr type: translation args: por-ukr dataset: name: flores101-devtest type: flores_101 args: por ukr devtest metrics: - name: BLEU type: bleu value: 25.1 - task: name: Translation por-rus type: translation args: por-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: por-rus metrics: - name: BLEU type: bleu value: 47.6 - task: name: Translation por-ukr type: translation args: por-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: por-ukr metrics: - name: BLEU type: bleu value: 44.7 --- # opus-mt-tc-big-pt-zle Neural machine translation model for translating from Portuguese (pt) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): por * target language(s): rus ukr * valid target language labels: >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-zle/opusTCv20210807_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT por-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>rus<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>ukr<< Esse é o meu lugar.", ">>rus<< Tom tem problemas de saúde." ] model_name = "pytorch-models/opus-mt-tc-big-pt-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Це моє місце. # У Тома проблемы со здоровьем. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-pt-zle") print(pipe(">>ukr<< Esse é o meu lugar.")) # expected output: Це моє місце. 
``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | por-rus | tatoeba-test-v2021-08-07 | 0.67980 | 47.6 | 10000 | 65326 | | por-ukr | tatoeba-test-v2021-08-07 | 0.65867 | 44.7 | 3372 | 18933 | | por-rus | flores101-devtest | 0.54675 | 26.8 | 1012 | 23295 | | por-ukr | flores101-devtest | 0.53690 | 25.1 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 03:20:20 EET 2022 * port machine: LM0-400-22516.local
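Since a valid target language label has to be prepended to every input, it can be handy to list the available labels programmatically. The snippet below is a small sketch that assumes the `>>id<<` tags are stored as ordinary tokens in the tokenizer vocabulary (which is how these multilingual OPUS-MT models are packaged); cross-check the result against the labels listed in the model info above.

```python
# Sketch: list the >>id<< target-language tags in this model's vocabulary (assumes they are plain vocab tokens)
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-pt-zle")
lang_tags = sorted(t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<"))
print(lang_tags)  # expected to include >>rus<< and >>ukr<<
```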
Helsinki-NLP/opus-mt-tc-big-zlw-zle
Helsinki-NLP
2023-10-10T11:09:10Z
117
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "cs", "dsb", "hsb", "pl", "ru", "uk", "zle", "zlw", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T12:50:12Z
--- language: - be - cs - dsb - hsb - pl - ru - uk - zle - zlw tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zlw-zle results: - task: name: Translation ces-rus type: translation args: ces-rus dataset: name: flores101-devtest type: flores_101 args: ces rus devtest metrics: - name: BLEU type: bleu value: 24.2 - task: name: Translation ces-ukr type: translation args: ces-ukr dataset: name: flores101-devtest type: flores_101 args: ces ukr devtest metrics: - name: BLEU type: bleu value: 22.9 - task: name: Translation pol-rus type: translation args: pol-rus dataset: name: flores101-devtest type: flores_101 args: pol rus devtest metrics: - name: BLEU type: bleu value: 20.1 - task: name: Translation ces-rus type: translation args: ces-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ces-rus metrics: - name: BLEU type: bleu value: 56.4 - task: name: Translation ces-ukr type: translation args: ces-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ces-ukr metrics: - name: BLEU type: bleu value: 53.0 - task: name: Translation pol-bel type: translation args: pol-bel dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: pol-bel metrics: - name: BLEU type: bleu value: 29.4 - task: name: Translation pol-rus type: translation args: pol-rus dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: pol-rus metrics: - name: BLEU type: bleu value: 55.3 - task: name: Translation pol-ukr type: translation args: pol-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: pol-ukr metrics: - name: BLEU type: bleu value: 48.6 - task: name: Translation ces-rus type: translation args: ces-rus dataset: name: newstest2012 type: wmt-2012-news args: ces-rus metrics: - name: BLEU type: bleu value: 21.0 - task: name: Translation ces-rus type: translation args: ces-rus dataset: name: newstest2013 type: wmt-2013-news args: ces-rus metrics: - name: BLEU type: bleu value: 27.2 --- # opus-mt-tc-big-zlw-zle Neural machine translation model for translating from West Slavic languages (zlw) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-19 * source language(s): ces dsb hsb pol * target language(s): bel rus ukr * valid target language labels: >>bel<< >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zle/opusTCv20210807+bt_transformer-big_2022-03-19.zip) * more information released models: [OPUS-MT zlw-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>rus<< Je vystudovaný právník.", ">>rus<< Gdzie jest moja książka ?" ] model_name = "pytorch-models/opus-mt-tc-big-zlw-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Он дипломированный юрист. # Где моя книга? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zlw-zle") print(pipe(">>rus<< Je vystudovaný právník.")) # expected output: Он дипломированный юрист. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zle/opusTCv20210807+bt_transformer-big_2022-03-19.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zle/opusTCv20210807+bt_transformer-big_2022-03-19.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | ces-rus | tatoeba-test-v2021-08-07 | 0.73154 | 56.4 | 2934 | 17790 | | ces-ukr | tatoeba-test-v2021-08-07 | 0.69934 | 53.0 | 1787 | 8891 | | pol-bel | tatoeba-test-v2021-08-07 | 0.51039 | 29.4 | 287 | 1730 | | pol-rus | tatoeba-test-v2021-08-07 | 0.73156 | 55.3 | 3543 | 22067 | | pol-ukr | tatoeba-test-v2021-08-07 | 0.68247 | 48.6 | 2519 | 13535 | | ces-rus | flores101-devtest | 0.52316 | 24.2 | 1012 | 23295 | | ces-ukr | flores101-devtest | 0.52261 | 22.9 | 1012 | 22810 | | pol-rus | flores101-devtest | 0.49414 | 20.1 | 1012 | 23295 | | pol-ukr | flores101-devtest | 0.48250 | 18.3 | 1012 | 22810 | | ces-rus | newstest2012 | 0.49469 | 21.0 | 3003 | 64790 | | ces-rus | newstest2013 | 0.54197 | 27.2 | 3000 | 58560 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 04:13:23 EET 2022 * port machine: LM0-400-22516.local
anders0204/poca-SoccerTwos
anders0204
2023-10-10T11:08:21Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-10-10T11:08:14Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: anders0204/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
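If you want to reuse the exported policy outside the browser viewer, the file can also be fetched programmatically. This is only a sketch: `SoccerTwos.onnx` is an assumed file name, so check the repository's file listing for the actual *.onnx / *.nn artifact.

```python
# Hypothetical sketch: download the exported policy from this repo with huggingface_hub
# (the file name "SoccerTwos.onnx" is an assumption - verify it in the repo's file list)
from huggingface_hub import hf_hub_download

policy_path = hf_hub_download(repo_id="anders0204/poca-SoccerTwos", filename="SoccerTwos.onnx")
print(policy_path)
```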
Helsinki-NLP/opus-mt-tc-big-fi-zls
Helsinki-NLP
2023-10-10T11:08:02Z
119
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "bg", "fi", "hr", "sl", "sr", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T16:46:33Z
--- language: - bg - fi - hr - sl - sr language_bcp47: - sr_Cyrl tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-fi-zls results: - task: name: Translation fin-bul type: translation args: fin-bul dataset: name: flores101-devtest type: flores_101 args: fin bul devtest metrics: - name: BLEU type: bleu value: 26.2 - name: chr-F type: chrf value: 0.54912 - task: name: Translation fin-hrv type: translation args: fin-hrv dataset: name: flores101-devtest type: flores_101 args: fin hrv devtest metrics: - name: BLEU type: bleu value: 21.3 - name: chr-F type: chrf value: 0.51468 - task: name: Translation fin-slv type: translation args: fin-slv dataset: name: flores101-devtest type: flores_101 args: fin slv devtest metrics: - name: BLEU type: bleu value: 22.3 - name: chr-F type: chrf value: 0.51226 - task: name: Translation fin-srp_Cyrl type: translation args: fin-srp_Cyrl dataset: name: flores101-devtest type: flores_101 args: fin srp_Cyrl devtest metrics: - name: BLEU type: bleu value: 21.8 - name: chr-F type: chrf value: 0.50774 --- # opus-mt-tc-big-fi-zls ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Finnish (fi) to South Slavic languages (zls). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-23 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): fin - Target Language(s): bul hrv slv srp_Cyrl - Language Pair(s): fin-bul fin-hrv fin-slv fin-srp_Cyrl - Valid Target Language Labels: >>bos<< >>bos_Cyrl<< >>bos_Latn<< >>bul<< >>chu<< >>hbs<< >>hbs_Cyrl<< >>hrv<< >>kjv<< >>mkd<< >>slv<< >>srp<< >>srp_Cyrl<< >>srp_Latn<< >>svm<< - **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT fin-zls README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-zls/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. 
A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>slv<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>bul<< Ajattelen vain sinua.", ">>slv<< Virtahevot rakastavat vettä." ] model_name = "pytorch-models/opus-mt-tc-big-fi-zls" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Мисля само за теб. # Povodni konji obožujejo vodo. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fi-zls") print(pipe(">>bul<< Ajattelen vain sinua.")) # expected output: Мисля само за теб. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fin-bul | flores101-devtest | 0.54912 | 26.2 | 1012 | 24700 | | fin-hrv | flores101-devtest | 0.51468 | 21.3 | 1012 | 22423 | | fin-slv | flores101-devtest | 0.51226 | 22.3 | 1012 | 23425 | | fin-srp_Cyrl | flores101-devtest | 0.50774 | 21.8 | 1012 | 23456 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:08:29 EEST 2022 * port machine: LM0-400-22516.local
Azma-AI/roberta-base-emotion-classifier
Azma-AI
2023-10-10T11:05:56Z
108
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "emotions", "multi-class-classification", "multi-label-classification", "en", "dataset:go_emotions", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-10T11:02:32Z
---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
- multi-class-classification
- multi-label-classification
datasets:
- go_emotions
license: mit
widget:
- text: "I am not having a great day."
---

Model trained from [roberta-base](https://huggingface.co/roberta-base) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset for multi-label classification.

[go_emotions](https://huggingface.co/datasets/go_emotions) is based on Reddit data and has 28 labels. It is a multi-label dataset where one or multiple labels may apply to any given input text, so this model produces 28 'probability' float outputs for any given input text. Typically a threshold of 0.5 is applied to each probability to obtain the prediction for that label.

The model was trained using `AutoModelForSequenceClassification.from_pretrained` with `problem_type="multi_label_classification"` for 3 epochs with a learning rate of 2e-5 and weight decay of 0.01.

Evaluation (binarizing each of the 28 outputs with a threshold of 0.5) on the dataset test split gives:
- Micro F1 0.585
- ROC AUC 0.751
- Accuracy 0.474

Given the multi-label nature of the task, these metrics are more meaningful when measured per label. Some labels (e.g. `gratitude`) perform very strongly when considered independently, with F1 around 0.9, while others (e.g. `relief`) perform very poorly. This is a challenging dataset: labels such as `relief` have far fewer examples in the training data (fewer than 100 out of 40k+), and there is also some ambiguity and/or labelling error visible in the `go_emotions` training data that is suspected to constrain performance.
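As a rough illustration of the thresholding described above, the following sketch loads the model with the standard Transformers API, applies a sigmoid to the 28 logits and keeps every label whose probability exceeds 0.5. It assumes standard multi-label usage rather than being an official snippet from this repository.

```python
# Sketch: multi-label inference with a 0.5 threshold (standard Transformers usage, assumed to match this model)
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Azma-AI/roberta-base-emotion-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I am not having a great day.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]   # 28 independent label probabilities
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```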
Helsinki-NLP/opus-mt-tc-big-itc-tr
Helsinki-NLP
2023-10-10T11:04:52Z
113
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "ca", "es", "fr", "gl", "it", "oc", "pt", "ro", "tr", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T13:25:33Z
--- language: - ca - es - fr - gl - it - oc - pt - ro - tr tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-itc-tr results: - task: name: Translation cat-tur type: translation args: cat-tur dataset: name: flores101-devtest type: flores_101 args: cat tur devtest metrics: - name: BLEU type: bleu value: 21.7 - name: chr-F type: chrf value: 0.54892 - task: name: Translation fra-tur type: translation args: fra-tur dataset: name: flores101-devtest type: flores_101 args: fra tur devtest metrics: - name: BLEU type: bleu value: 21.7 - name: chr-F type: chrf value: 0.55342 - task: name: Translation glg-tur type: translation args: glg-tur dataset: name: flores101-devtest type: flores_101 args: glg tur devtest metrics: - name: BLEU type: bleu value: 20.6 - name: chr-F type: chrf value: 0.53936 - task: name: Translation ita-tur type: translation args: ita-tur dataset: name: flores101-devtest type: flores_101 args: ita tur devtest metrics: - name: BLEU type: bleu value: 18.4 - name: chr-F type: chrf value: 0.52842 - task: name: Translation oci-tur type: translation args: oci-tur dataset: name: flores101-devtest type: flores_101 args: oci tur devtest metrics: - name: BLEU type: bleu value: 17.6 - name: chr-F type: chrf value: 0.50618 - task: name: Translation por-tur type: translation args: por-tur dataset: name: flores101-devtest type: flores_101 args: por tur devtest metrics: - name: BLEU type: bleu value: 23.5 - name: chr-F type: chrf value: 0.56396 - task: name: Translation ron-tur type: translation args: ron-tur dataset: name: flores101-devtest type: flores_101 args: ron tur devtest metrics: - name: BLEU type: bleu value: 21.5 - name: chr-F type: chrf value: 0.55409 - task: name: Translation spa-tur type: translation args: spa-tur dataset: name: flores101-devtest type: flores_101 args: spa tur devtest metrics: - name: BLEU type: bleu value: 16.5 - name: chr-F type: chrf value: 0.51066 - task: name: Translation fra-tur type: translation args: fra-tur dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fra-tur metrics: - name: BLEU type: bleu value: 34.8 - name: chr-F type: chrf value: 0.63006 - task: name: Translation ita-tur type: translation args: ita-tur dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ita-tur metrics: - name: BLEU type: bleu value: 34.9 - name: chr-F type: chrf value: 0.59991 - task: name: Translation por-tur type: translation args: por-tur dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: por-tur metrics: - name: BLEU type: bleu value: 40.1 - name: chr-F type: chrf value: 0.67836 - task: name: Translation ron-tur type: translation args: ron-tur dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ron-tur metrics: - name: BLEU type: bleu value: 35.5 - name: chr-F type: chrf value: 0.64031 - task: name: Translation spa-tur type: translation args: spa-tur dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: spa-tur metrics: - name: BLEU type: bleu value: 45.2 - name: chr-F type: chrf value: 0.71524 --- # opus-mt-tc-big-itc-tr ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Italic languages (itc) to Turkish (tr). 
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-07-28
- **License:** CC-BY-4.0
- **Language(s):**
  - Source Language(s): cat fra glg ita lad lad_Latn oci por ron spa
  - Target Language(s): tur
  - Language Pair(s): cat-tur fra-tur glg-tur ita-tur oci-tur por-tur ron-tur spa-tur
  - Valid Target Language Labels:
- **Original Model**: [opusTCv20210807_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-tur/opusTCv20210807_transformer-big_2022-07-28.zip)
- **Resources for more information:**
  - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
  - More information about released models for this language pair: [OPUS-MT itc-tur README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-tur/README.md)
  - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)

## Uses

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## How to Get Started With the Model

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "\"Di che nazionalità sono le tue dottoresse?\" \"Malese.\"",
    "\"Di che nazionalità sono i nostri amici?\" \"Maltese.\""
]

model_name = "pytorch-models/opus-mt-tc-big-itc-tr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print( tokenizer.decode(t, skip_special_tokens=True) )

# expected output:
#     "Doktorların hangi milletten?" "Malezyalı."
#     "Arkadaşlarımız hangi milletten?" "Maltalı."
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-tr")
print(pipe("\"Di che nazionalità sono le tue dottoresse?\" \"Malese.\""))

# expected output: "Doktorların hangi milletten?" "Malezyalı."
``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-tur/opusTCv20210807_transformer-big_2022-07-28.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-tur/opusTCv20210807_transformer-big_2022-07-28.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-tur/opusTCv20210807_transformer-big_2022-07-28.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-tur | tatoeba-test-v2021-08-07 | 0.63006 | 34.8 | 2582 | 14307 | | ita-tur | tatoeba-test-v2021-08-07 | 0.59991 | 34.9 | 10000 | 75807 | | por-tur | tatoeba-test-v2021-08-07 | 0.67836 | 40.1 | 1794 | 9312 | | ron-tur | tatoeba-test-v2021-08-07 | 0.64031 | 35.5 | 2460 | 13788 | | spa-tur | tatoeba-test-v2021-08-07 | 0.71524 | 45.2 | 10615 | 56099 | | cat-tur | flores101-devtest | 0.54892 | 21.7 | 1012 | 20253 | | fra-tur | flores101-devtest | 0.55342 | 21.7 | 1012 | 20253 | | glg-tur | flores101-devtest | 0.53936 | 20.6 | 1012 | 20253 | | ita-tur | flores101-devtest | 0.52842 | 18.4 | 1012 | 20253 | | oci-tur | flores101-devtest | 0.50618 | 17.6 | 1012 | 20253 | | por-tur | flores101-devtest | 0.56396 | 23.5 | 1012 | 20253 | | ron-tur | flores101-devtest | 0.55409 | 21.5 | 1012 | 20253 | | spa-tur | flores101-devtest | 0.51066 | 16.5 | 1012 | 20253 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:03:26 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-zls-de
Helsinki-NLP
2023-10-10T10:57:17Z
142
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "bg", "de", "hr", "mk", "sh", "sl", "sr", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T14:51:09Z
--- language: - bg - de - hr - mk - sh - sl - sr language_bcp47: - sr_Cyrl - sr_Latn tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zls-de results: - task: name: Translation bul-deu type: translation args: bul-deu dataset: name: flores101-devtest type: flores_101 args: bul deu devtest metrics: - name: BLEU type: bleu value: 28.4 - name: chr-F type: chrf value: 0.57688 - task: name: Translation hrv-deu type: translation args: hrv-deu dataset: name: flores101-devtest type: flores_101 args: hrv deu devtest metrics: - name: BLEU type: bleu value: 27.4 - name: chr-F type: chrf value: 0.56674 - task: name: Translation mkd-deu type: translation args: mkd-deu dataset: name: flores101-devtest type: flores_101 args: mkd deu devtest metrics: - name: BLEU type: bleu value: 29.3 - name: chr-F type: chrf value: 0.57688 - task: name: Translation slv-deu type: translation args: slv-deu dataset: name: flores101-devtest type: flores_101 args: slv deu devtest metrics: - name: BLEU type: bleu value: 26.7 - name: chr-F type: chrf value: 0.56258 - task: name: Translation srp_Cyrl-deu type: translation args: srp_Cyrl-deu dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl deu devtest metrics: - name: BLEU type: bleu value: 30.7 - name: chr-F type: chrf value: 0.59271 - task: name: Translation bul-deu type: translation args: bul-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-deu metrics: - name: BLEU type: bleu value: 54.5 - name: chr-F type: chrf value: 0.71220 - task: name: Translation hbs-deu type: translation args: hbs-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-deu metrics: - name: BLEU type: bleu value: 54.8 - name: chr-F type: chrf value: 0.71283 - task: name: Translation hrv-deu type: translation args: hrv-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hrv-deu metrics: - name: BLEU type: bleu value: 53.1 - name: chr-F type: chrf value: 0.69448 - task: name: Translation slv-deu type: translation args: slv-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: slv-deu metrics: - name: BLEU type: bleu value: 21.1 - name: chr-F type: chrf value: 0.36339 - task: name: Translation srp_Latn-deu type: translation args: srp_Latn-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Latn-deu metrics: - name: BLEU type: bleu value: 56.0 - name: chr-F type: chrf value: 0.72489 --- # opus-mt-tc-big-zls-de ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from South Slavic languages (zls) to German (de). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. 
Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-26 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): bos_Latn bul hbs hrv mkd slv srp_Cyrl srp_Latn - Target Language(s): deu - Language Pair(s): bul-deu hbs-deu hrv-deu mkd-deu slv-deu srp_Cyrl-deu srp_Latn-deu - Valid Target Language Labels: - **Original Model**: [opusTCv20210807_transformer-big_2022-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-deu/opusTCv20210807_transformer-big_2022-07-26.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT zls-deu README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-deu/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Jesi li ti student?", "Dve stvari deca treba da dobiju od svojih roditelja: korene i krila." ] model_name = "pytorch-models/opus-mt-tc-big-zls-de" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Sind Sie Student? # Zwei Dinge sollten Kinder von ihren Eltern bekommen: Wurzeln und Flügel. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zls-de") print(pipe("Jesi li ti student?")) # expected output: Sind Sie Student? 
``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-deu/opusTCv20210807_transformer-big_2022-07-26.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-deu/opusTCv20210807_transformer-big_2022-07-26.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-deu/opusTCv20210807_transformer-big_2022-07-26.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bul-deu | tatoeba-test-v2021-08-07 | 0.71220 | 54.5 | 314 | 2224 | | hbs-deu | tatoeba-test-v2021-08-07 | 0.71283 | 54.8 | 1959 | 15559 | | hrv-deu | tatoeba-test-v2021-08-07 | 0.69448 | 53.1 | 782 | 5734 | | slv-deu | tatoeba-test-v2021-08-07 | 0.36339 | 21.1 | 492 | 3003 | | srp_Latn-deu | tatoeba-test-v2021-08-07 | 0.72489 | 56.0 | 986 | 8500 | | bul-deu | flores101-devtest | 0.57688 | 28.4 | 1012 | 25094 | | hrv-deu | flores101-devtest | 0.56674 | 27.4 | 1012 | 25094 | | mkd-deu | flores101-devtest | 0.57688 | 29.3 | 1012 | 25094 | | slv-deu | flores101-devtest | 0.56258 | 26.7 | 1012 | 25094 | | srp_Cyrl-deu | flores101-devtest | 0.59271 | 30.7 | 1012 | 25094 | ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Sat Aug 13 00:05:30 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-gmq-en
Helsinki-NLP
2023-10-10T10:53:03Z
130
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "tc", "big", "gmq", "en", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T16:13:21Z
--- language: - da - en - fo - gmq - is - nb - nn - false - sv tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-gmq-en results: - task: name: Translation dan-eng type: translation args: dan-eng dataset: name: flores101-devtest type: flores_101 args: dan eng devtest metrics: - name: BLEU type: bleu value: 49.3 - task: name: Translation isl-eng type: translation args: isl-eng dataset: name: flores101-devtest type: flores_101 args: isl eng devtest metrics: - name: BLEU type: bleu value: 34.2 - task: name: Translation nob-eng type: translation args: nob-eng dataset: name: flores101-devtest type: flores_101 args: nob eng devtest metrics: - name: BLEU type: bleu value: 44.2 - task: name: Translation swe-eng type: translation args: swe-eng dataset: name: flores101-devtest type: flores_101 args: swe eng devtest metrics: - name: BLEU type: bleu value: 49.8 - task: name: Translation isl-eng type: translation args: isl-eng dataset: name: newsdev2021.is-en type: newsdev2021.is-en args: isl-eng metrics: - name: BLEU type: bleu value: 30.4 - task: name: Translation dan-eng type: translation args: dan-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: dan-eng metrics: - name: BLEU type: bleu value: 65.9 - task: name: Translation fao-eng type: translation args: fao-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fao-eng metrics: - name: BLEU type: bleu value: 30.1 - task: name: Translation isl-eng type: translation args: isl-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: isl-eng metrics: - name: BLEU type: bleu value: 53.3 - task: name: Translation nno-eng type: translation args: nno-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: nno-eng metrics: - name: BLEU type: bleu value: 56.1 - task: name: Translation nob-eng type: translation args: nob-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: nob-eng metrics: - name: BLEU type: bleu value: 60.2 - task: name: Translation swe-eng type: translation args: swe-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: swe-eng metrics: - name: BLEU type: bleu value: 66.4 - task: name: Translation isl-eng type: translation args: isl-eng dataset: name: newstest2021.is-en type: wmt-2021-news args: isl-eng metrics: - name: BLEU type: bleu value: 34.4 --- # opus-mt-tc-big-gmq-en Neural machine translation model for translating from North Germanic languages (gmq) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-09 * source language(s): dan fao isl nno nob nor swe * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip) * more information released models: [OPUS-MT gmq-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Han var synligt nervøs.", "Inte ens Tom själv var övertygad." ] model_name = "pytorch-models/opus-mt-tc-big-gmq-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # He was visibly nervous. # Even Tom was not convinced. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-gmq-en") print(pipe("Han var synligt nervøs.")) # expected output: He was visibly nervous. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | dan-eng | tatoeba-test-v2021-08-07 | 0.78292 | 65.9 | 10795 | 79684 | | fao-eng | tatoeba-test-v2021-08-07 | 0.47467 | 30.1 | 294 | 1984 | | isl-eng | tatoeba-test-v2021-08-07 | 0.68346 | 53.3 | 2503 | 19788 | | nno-eng | tatoeba-test-v2021-08-07 | 0.69788 | 56.1 | 460 | 3524 | | nob-eng | tatoeba-test-v2021-08-07 | 0.73524 | 60.2 | 4539 | 36823 | | swe-eng | tatoeba-test-v2021-08-07 | 0.77665 | 66.4 | 10362 | 68513 | | dan-eng | flores101-devtest | 0.72322 | 49.3 | 1012 | 24721 | | isl-eng | flores101-devtest | 0.59616 | 34.2 | 1012 | 24721 | | nob-eng | flores101-devtest | 0.68224 | 44.2 | 1012 | 24721 | | swe-eng | flores101-devtest | 0.72042 | 49.8 | 1012 | 24721 | | isl-eng | newsdev2021.is-en | 0.56709 | 30.4 | 2004 | 46383 | | isl-eng | newstest2021.is-en | 0.57756 | 34.4 | 1000 | 22529 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 19:13:11 EEST 2022 * port machine: LM0-400-22516.local
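The benchmark table above reports corpus-level BLEU and chr-F. A minimal sketch of computing the same kind of scores with the sacrebleu package, using made-up toy hypothesis/reference pairs (actual values depend on the sacrebleu version and its default settings):

```python
# Toy example: corpus-level BLEU and chrF with sacrebleu (pip install sacrebleu).
import sacrebleu

hypotheses = ["He was visibly nervous.", "Even Tom was not convinced."]
references = ["He was visibly nervous.", "Not even Tom himself was convinced."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# Note: sacrebleu reports chrF on a 0-100 scale, while the table above uses 0-1.
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.1f}")
```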
Helsinki-NLP/opus-mt-tc-big-en-lv
Helsinki-NLP
2023-10-10T10:50:52Z
164
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "lv", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T14:36:12Z
--- language: - en - lv tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-lv results: - task: name: Translation eng-lav type: translation args: eng-lav dataset: name: flores101-devtest type: flores_101 args: eng lav devtest metrics: - name: BLEU type: bleu value: 30.1 - task: name: Translation eng-lav type: translation args: eng-lav dataset: name: newsdev2017 type: newsdev2017 args: eng-lav metrics: - name: BLEU type: bleu value: 28.9 - task: name: Translation eng-lav type: translation args: eng-lav dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-lav metrics: - name: BLEU type: bleu value: 44.0 - task: name: Translation eng-lav type: translation args: eng-lav dataset: name: newstest2017 type: wmt-2017-news args: eng-lav metrics: - name: BLEU type: bleu value: 22.1 --- # opus-mt-tc-big-en-lv Neural machine translation model for translating from English (en) to Latvian (lv). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-13 * source language(s): eng * target language(s): lav * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lav/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-lav README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-lav/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>lav<< A day has twenty-four hours.", ">>ltg<< He's a good lawyer." 
] model_name = "pytorch-models/opus-mt-tc-big-en-lv" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Dienā ir divdesmit četras stundas. # Vyss ir labs advokats. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-lv") print(pipe(">>lav<< A day has twenty-four hours.")) # expected output: Dienā ir divdesmit četras stundas. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lav/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lav/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-lav | tatoeba-test-v2021-08-07 | 0.66411 | 44.0 | 1631 | 9932 | | eng-lav | flores101-devtest | 0.59397 | 30.1 | 1012 | 22092 | | eng-lav | newsdev2017 | 0.58082 | 28.9 | 2003 | 41503 | | eng-lav | newstest2017 | 0.53202 | 22.1 | 2001 | 39392 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:36:04 EEST 2022 * port machine: LM0-400-22516.local
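The usage example above prefixes each source sentence with a target-language token such as `>>lav<<` or `>>ltg<<`. A minimal sketch for listing the language tokens known to the tokenizer, assuming a transformers version in which MarianTokenizer exposes `supported_language_codes`:

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-en-lv")

# Language-code control tokens found in the vocabulary, e.g. ['>>lav<<', '>>ltg<<'].
# Prefix one of them to each source sentence as shown in the example above.
print(tokenizer.supported_language_codes)
```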
Helsinki-NLP/opus-mt-tc-big-bg-en
Helsinki-NLP
2023-10-10T10:49:48Z
170
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "bg", "en", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T15:24:05Z
--- language: - bg - en tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-bg-en results: - task: name: Translation bul-eng type: translation args: bul-eng dataset: name: flores101-devtest type: flores_101 args: bul eng devtest metrics: - name: BLEU type: bleu value: 42.9 - task: name: Translation bul-eng type: translation args: bul-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-eng metrics: - name: BLEU type: bleu value: 60.5 --- # opus-mt-tc-big-bg-en Neural machine translation model for translating from Bulgarian (bg) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-09 * source language(s): bul * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip) * more information released models: [OPUS-MT bul-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "2001 е годината, с която започва 21-ви век.", "Това е Copacabana!" 
] model_name = "pytorch-models/opus-mt-tc-big-bg-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # 2001 was the year the 21st century began. # It's Copacabana! ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-bg-en") print(pipe("2001 е годината, с която започва 21-ви век.")) # expected output: 2001 was the year the 21st century began. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bul-eng | tatoeba-test-v2021-08-07 | 0.73687 | 60.5 | 10000 | 71872 | | bul-eng | flores101-devtest | 0.67938 | 42.9 | 1012 | 24721 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 18:23:56 EEST 2022 * port machine: LM0-400-22516.local
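Decoding options can be passed straight to `generate()`; a minimal sketch with explicit beam search and a cap on the generated length (the parameter values are chosen here only for illustration):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-bg-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(["Това е Copacabana!"], return_tensors="pt", padding=True)

# Beam search with 4 beams, generating at most 128 new tokens.
outputs = model.generate(**inputs, num_beams=4, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```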
Helsinki-NLP/opus-mt-tc-big-en-el
Helsinki-NLP
2023-10-10T10:48:40Z
170
1
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "el", "en", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T13:53:07Z
--- language: - el - en tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-el results: - task: name: Translation eng-ell type: translation args: eng-ell dataset: name: flores101-devtest type: flores_101 args: eng ell devtest metrics: - name: BLEU type: bleu value: 27.4 - task: name: Translation eng-ell type: translation args: eng-ell dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-ell metrics: - name: BLEU type: bleu value: 55.4 --- # opus-mt-tc-big-en-el Neural machine translation model for translating from English (en) to Modern Greek (1453-) (el). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-13 * source language(s): eng * target language(s): ell * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ell/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-ell README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ell/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "If I weren't broke, I'd buy it.", "I received your telegram." 
] model_name = "pytorch-models/opus-mt-tc-big-en-el" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Αν δεν ήμουν άφραγκος, θα το αγόραζα. # Έλαβα το τηλεγράφημα σου. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-el") print(pipe("If I weren't broke, I'd buy it.")) # expected output: Αν δεν ήμουν άφραγκος, θα το αγόραζα. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ell/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ell/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-ell | tatoeba-test-v2021-08-07 | 0.73660 | 55.4 | 10899 | 66884 | | eng-ell | flores101-devtest | 0.53952 | 27.4 | 1012 | 26615 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 16:52:58 EEST 2022 * port machine: LM0-400-22516.local
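The pipeline also accepts a list of sentences; a minimal sketch that translates several inputs in one call (the `batch_size` value is an arbitrary choice for illustration):

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-el")

sentences = [
    "I received your telegram.",
    "If I weren't broke, I'd buy it.",
]

# Each result is a dict with a "translation_text" field.
for result in pipe(sentences, batch_size=2):
    print(result["translation_text"])
```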
Helsinki-NLP/opus-mt-tc-big-en-ro
Helsinki-NLP
2023-10-10T10:46:29Z
256
4
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "ro", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T14:55:54Z
--- language: - en - ro tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-ro results: - task: name: Translation eng-ron type: translation args: eng-ron dataset: name: flores101-devtest type: flores_101 args: eng ron devtest metrics: - name: BLEU type: bleu value: 40.4 - task: name: Translation eng-ron type: translation args: eng-ron dataset: name: newsdev2016 type: newsdev2016 args: eng-ron metrics: - name: BLEU type: bleu value: 36.4 - task: name: Translation eng-ron type: translation args: eng-ron dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-ron metrics: - name: BLEU type: bleu value: 48.6 - task: name: Translation eng-ron type: translation args: eng-ron dataset: name: newstest2016 type: wmt-2016-news args: eng-ron metrics: - name: BLEU type: bleu value: 34.0 --- # opus-mt-tc-big-en-ro Neural machine translation model for translating from English (en) to Romanian (ro). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-02-25 * source language(s): eng * target language(s): ron * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opusTCv20210807+bt_transformer-big_2022-02-25.zip) * more information released models: [OPUS-MT eng-ron README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>ron<< A bad writer's prose is full of hackneyed phrases.", ">>ron<< Zero is a special number." ] model_name = "pytorch-models/opus-mt-tc-big-en-ro" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Proza unui scriitor prost este plină de fraze tocite. # Zero este un număr special. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ro") print(pipe(">>ron<< A bad writer's prose is full of hackneyed phrases.")) # expected output: Proza unui scriitor prost este plină de fraze tocite. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-ron | tatoeba-test-v2021-08-07 | 0.68606 | 48.6 | 5508 | 40367 | | eng-ron | flores101-devtest | 0.64876 | 40.4 | 1012 | 26799 | | eng-ron | newsdev2016 | 0.62682 | 36.4 | 1999 | 51300 | | eng-ron | newstest2016 | 0.60702 | 34.0 | 1999 | 48945 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:55:46 EEST 2022 * port machine: LM0-400-22516.local
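The model uses the SentencePiece (spm32k) vocabulary mentioned under model info, and the `>>ron<<` prefix is handled as a single control token. A minimal sketch for inspecting the segmentation (the printed pieces are illustrative):

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-en-ro")

pieces = tokenizer.tokenize(">>ron<< Zero is a special number.")
print(pieces)
# Illustrative output: ['>>ron<<', '▁Zero', '▁is', '▁a', '▁special', '▁number', '.']
```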
Helsinki-NLP/opus-mt-tc-big-hu-en
Helsinki-NLP
2023-10-10T10:45:29Z
1036
3
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "hu", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T16:33:48Z
--- language: - en - hu tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-hu-en results: - task: name: Translation hun-eng type: translation args: hun-eng dataset: name: flores101-devtest type: flores_101 args: hun eng devtest metrics: - name: BLEU type: bleu value: 34.6 - task: name: Translation hun-eng type: translation args: hun-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hun-eng metrics: - name: BLEU type: bleu value: 50.4 - task: name: Translation hun-eng type: translation args: hun-eng dataset: name: newstest2009 type: wmt-2009-news args: hun-eng metrics: - name: BLEU type: bleu value: 23.4 --- # opus-mt-tc-big-hu-en Neural machine translation model for translating from Hungarian (hu) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-09 * source language(s): hun * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip) * more information released models: [OPUS-MT hun-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Bárcsak ne láttam volna ilyen borzalmas filmet!", "Iskolában van." 
] model_name = "pytorch-models/opus-mt-tc-big-hu-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # I wish I hadn't seen such a terrible movie. # She's at school. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-hu-en") print(pipe("Bárcsak ne láttam volna ilyen borzalmas filmet!")) # expected output: I wish I hadn't seen such a terrible movie. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | hun-eng | tatoeba-test-v2021-08-07 | 0.66644 | 50.4 | 13037 | 94699 | | hun-eng | flores101-devtest | 0.61974 | 34.6 | 1012 | 24721 | | hun-eng | newssyscomb2009 | 0.52563 | 24.7 | 502 | 11818 | | hun-eng | newstest2009 | 0.51698 | 23.4 | 2525 | 65399 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 19:33:38 EEST 2022 * port machine: LM0-400-22516.local
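Inference also works on a GPU; a minimal sketch that moves the model and the tokenized batch to CUDA when it is available:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-hu-en"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

inputs = tokenizer(["Iskolában van."], return_tensors="pt", padding=True).to(device)
translated = model.generate(**inputs)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```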
Helsinki-NLP/opus-mt-tc-big-en-lt
Helsinki-NLP
2023-10-10T10:42:32Z
270
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "lt", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T14:42:47Z
--- language: - en - lt tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-lt results: - task: name: Translation eng-lit type: translation args: eng-lit dataset: name: flores101-devtest type: flores_101 args: eng lit devtest metrics: - name: BLEU type: bleu value: 28.0 - task: name: Translation eng-lit type: translation args: eng-lit dataset: name: newsdev2019 type: newsdev2019 args: eng-lit metrics: - name: BLEU type: bleu value: 26.6 - task: name: Translation eng-lit type: translation args: eng-lit dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-lit metrics: - name: BLEU type: bleu value: 39.5 - task: name: Translation eng-lit type: translation args: eng-lit dataset: name: newstest2019 type: wmt-2019-news args: eng-lit metrics: - name: BLEU type: bleu value: 17.5 --- # opus-mt-tc-big-en-lt Neural machine translation model for translating from English (en) to Lithuanian (lt). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-02-25 * source language(s): eng * target language(s): lit * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lit/opusTCv20210807+bt_transformer-big_2022-02-25.zip) * more information released models: [OPUS-MT eng-lit README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-lit/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "A cat was sitting on the chair.", "Yukiko likes potatoes." 
] model_name = "pytorch-models/opus-mt-tc-big-en-lt" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Katė sėdėjo ant kėdės. # Jukiko mėgsta bulves. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-lt") print(pipe("A cat was sitting on the chair.")) # expected output: Katė sėdėjo ant kėdės. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lit/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lit/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-lit | tatoeba-test-v2021-08-07 | 0.67434 | 39.5 | 2528 | 14942 | | eng-lit | flores101-devtest | 0.59593 | 28.0 | 1012 | 20695 | | eng-lit | newsdev2019 | 0.58444 | 26.6 | 2000 | 39627 | | eng-lit | newstest2019 | 0.51559 | 17.5 | 998 | 19711 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:42:39 EEST 2022 * port machine: LM0-400-22516.local
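For offline use, the model and tokenizer can be saved once and then loaded from a local directory; a minimal sketch (the local path is a hypothetical example):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-en-lt"
local_dir = "./opus-mt-tc-big-en-lt"  # hypothetical local path

# Download once and save locally.
MarianTokenizer.from_pretrained(model_name).save_pretrained(local_dir)
MarianMTModel.from_pretrained(model_name).save_pretrained(local_dir)

# Later, load without network access.
tokenizer = MarianTokenizer.from_pretrained(local_dir)
model = MarianMTModel.from_pretrained(local_dir)
```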
Helsinki-NLP/opus-mt-tc-big-zle-de
Helsinki-NLP
2023-10-10T10:41:28Z
323
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "de", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T08:57:20Z
--- language: - be - de - ru - uk - zle tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zle-de results: - task: name: Translation rus-deu type: translation args: rus-deu dataset: name: flores101-devtest type: flores_101 args: rus deu devtest metrics: - name: BLEU type: bleu value: 26.1 - task: name: Translation ukr-deu type: translation args: ukr-deu dataset: name: flores101-devtest type: flores_101 args: ukr deu devtest metrics: - name: BLEU type: bleu value: 28.1 - task: name: Translation bel-deu type: translation args: bel-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bel-deu metrics: - name: BLEU type: bleu value: 44.8 - task: name: Translation rus-deu type: translation args: rus-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-deu metrics: - name: BLEU type: bleu value: 51.8 - task: name: Translation ukr-deu type: translation args: ukr-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ukr-deu metrics: - name: BLEU type: bleu value: 54.7 - task: name: Translation rus-deu type: translation args: rus-deu dataset: name: newstest2013 type: wmt-2013-news args: rus-deu metrics: - name: BLEU type: bleu value: 25.2 --- # opus-mt-tc-big-zle-de Neural machine translation model for translating from East Slavic languages (zle) to German (de). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-19 * source language(s): bel rus ukr * target language(s): deu * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.zip) * more information released models: [OPUS-MT zle-deu README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-deu/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Это был по-настоящему прекрасный день.", "Дождь кончился?" ] model_name = "pytorch-models/opus-mt-tc-big-zle-de" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Es war ein wirklich schöner Tag. # Ist der Regen vorbei? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-de") print(pipe("Это был по-настоящему прекрасный день.")) # expected output: Es war ein wirklich schöner Tag. 
``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bel-deu | tatoeba-test-v2021-08-07 | 0.63720 | 44.8 | 551 | 4182 | | rus-deu | tatoeba-test-v2021-08-07 | 0.69768 | 51.8 | 12800 | 98842 | | ukr-deu | tatoeba-test-v2021-08-07 | 0.70860 | 54.7 | 10319 | 64646 | | bel-deu | flores101-devtest | 0.47052 | 12.9 | 1012 | 25094 | | rus-deu | flores101-devtest | 0.56159 | 26.1 | 1012 | 25094 | | ukr-deu | flores101-devtest | 0.57251 | 28.1 | 1012 | 25094 | | rus-deu | newstest2012 | 0.49257 | 19.8 | 3003 | 72886 | | rus-deu | newstest2013 | 0.54015 | 25.2 | 3000 | 63737 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Wed Mar 23 22:16:45 EET 2022 * port machine: LM0-400-22516.local
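Beam search can also return several candidate translations for the same input; a minimal sketch (`num_return_sequences` must not exceed `num_beams`):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-zle-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(["Дождь кончился?"], return_tensors="pt", padding=True)

# Return the three best beams for the single input sentence.
outputs = model.generate(**inputs, num_beams=4, num_return_sequences=3)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```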
Helsinki-NLP/opus-mt-tc-big-en-hu
Helsinki-NLP
2023-10-10T10:40:26Z
1246
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "hu", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T14:21:29Z
--- language: - en - hu tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-hu results: - task: name: Translation eng-hun type: translation args: eng-hun dataset: name: flores101-devtest type: flores_101 args: eng hun devtest metrics: - name: BLEU type: bleu value: 29.6 - task: name: Translation eng-hun type: translation args: eng-hun dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-hun metrics: - name: BLEU type: bleu value: 38.7 - task: name: Translation eng-hun type: translation args: eng-hun dataset: name: newstest2009 type: wmt-2009-news args: eng-hun metrics: - name: BLEU type: bleu value: 20.3 --- # opus-mt-tc-big-en-hu Neural machine translation model for translating from English (en) to Hungarian (hu). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-02-25 * source language(s): eng * target language(s): hun * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.zip) * more information released models: [OPUS-MT eng-hun README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hun/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "I wish I hadn't seen such a horrible film.", "She's at school." 
] model_name = "pytorch-models/opus-mt-tc-big-en-hu" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Bárcsak ne láttam volna ilyen szörnyű filmet. # Iskolában van. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-hu") print(pipe("I wish I hadn't seen such a horrible film.")) # expected output: Bárcsak ne láttam volna ilyen szörnyű filmet. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-hun | tatoeba-test-v2021-08-07 | 0.62096 | 38.7 | 13037 | 79562 | | eng-hun | flores101-devtest | 0.60159 | 29.6 | 1012 | 22183 | | eng-hun | newssyscomb2009 | 0.51918 | 20.6 | 502 | 9733 | | eng-hun | newstest2009 | 0.50973 | 20.3 | 2525 | 54965 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:21:20 EEST 2022 * port machine: LM0-400-22516.local
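The training data is sentence-aligned, so longer passages are best split into sentences before translation. A minimal sketch using a naive regex splitter (a proper sentence segmenter would be preferable in practice):

```python
import re
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-hu")

paragraph = "She's at school. I wish I hadn't seen such a horrible film."

# Naive split on sentence-final punctuation followed by whitespace.
sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
translations = [r["translation_text"] for r in pipe(sentences)]
print(" ".join(translations))
```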
Helsinki-NLP/opus-mt-tc-big-en-cat_oci_spa
Helsinki-NLP
2023-10-10T10:38:12Z
130
3
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "ca", "en", "es", "oc", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T13:40:56Z
--- language: - ca - en - es - oc tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-cat_oci_spa results: - task: name: Translation eng-cat type: translation args: eng-cat dataset: name: flores101-devtest type: flores_101 args: eng cat devtest metrics: - name: BLEU type: bleu value: 41.5 - task: name: Translation eng-oci type: translation args: eng-oci dataset: name: flores101-devtest type: flores_101 args: eng oci devtest metrics: - name: BLEU type: bleu value: 25.4 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: flores101-devtest type: flores_101 args: eng spa devtest metrics: - name: BLEU type: bleu value: 28.1 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: news-test2008 type: news-test2008 args: eng-spa metrics: - name: BLEU type: bleu value: 30.0 - task: name: Translation eng-cat type: translation args: eng-cat dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-cat metrics: - name: BLEU type: bleu value: 47.8 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-spa metrics: - name: BLEU type: bleu value: 57.0 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: tico19-test type: tico19-test args: eng-spa metrics: - name: BLEU type: bleu value: 52.5 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: newstest2009 type: wmt-2009-news args: eng-spa metrics: - name: BLEU type: bleu value: 30.5 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: newstest2010 type: wmt-2010-news args: eng-spa metrics: - name: BLEU type: bleu value: 37.4 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: newstest2011 type: wmt-2011-news args: eng-spa metrics: - name: BLEU type: bleu value: 39.1 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: newstest2012 type: wmt-2012-news args: eng-spa metrics: - name: BLEU type: bleu value: 39.6 - task: name: Translation eng-spa type: translation args: eng-spa dataset: name: newstest2013 type: wmt-2013-news args: eng-spa metrics: - name: BLEU type: bleu value: 35.8 --- # opus-mt-tc-big-en-cat_oci_spa Neural machine translation model for translating from English (en) to Catalan, Occitan and Spanish (cat+oci+spa). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-13 * source language(s): eng * target language(s): cat spa * valid target language labels: >>cat<< >>spa<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cat+oci+spa/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-cat+oci+spa README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cat+oci+spa/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>cat<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>spa<< Why do you want Tom to go there with me?", ">>spa<< She forced him to eat spinach." ] model_name = "pytorch-models/opus-mt-tc-big-en-cat_oci_spa" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # ¿Por qué quieres que Tom vaya conmigo? # Ella lo obligó a comer espinacas. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-cat_oci_spa") print(pipe(">>spa<< Why do you want Tom to go there with me?")) # expected output: ¿Por qué quieres que Tom vaya conmigo? 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cat+oci+spa/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cat+oci+spa/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-cat | tatoeba-test-v2021-08-07 | 0.66414 | 47.8 | 1631 | 12344 | | eng-spa | tatoeba-test-v2021-08-07 | 0.73725 | 57.0 | 16583 | 134710 | | eng-cat | flores101-devtest | 0.66071 | 41.5 | 1012 | 27304 | | eng-oci | flores101-devtest | 0.56192 | 25.4 | 1012 | 27305 | | eng-spa | flores101-devtest | 0.56288 | 28.1 | 1012 | 29199 | | eng-spa | newssyscomb2009 | 0.58431 | 31.4 | 502 | 12503 | | eng-spa | news-test2008 | 0.56622 | 30.0 | 2051 | 52586 | | eng-spa | newstest2009 | 0.57988 | 30.5 | 2525 | 68111 | | eng-spa | newstest2010 | 0.62343 | 37.4 | 2489 | 65480 | | eng-spa | newstest2011 | 0.62424 | 39.1 | 3003 | 79476 | | eng-spa | newstest2012 | 0.63006 | 39.6 | 3003 | 79006 | | eng-spa | newstest2013 | 0.60291 | 35.8 | 3000 | 70528 | | eng-spa | tico19-test | 0.73224 | 52.5 | 2100 | 66563 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 16:40:45 EEST 2022 * port machine: LM0-400-22516.local
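Because the target language is selected by the sentence-initial token, the same pipeline shown in the usage section can also produce Catalan (or Occitan) output simply by switching the prefix. A minimal sketch (translation output not verified here):

```python
from transformers import pipeline

# Same checkpoint as in the usage section; the >>cat<< prefix selects Catalan as the target language.
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-cat_oci_spa")
print(pipe(">>cat<< Why do you want Tom to go there with me?")[0]["translation_text"])
```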
Helsinki-NLP/opus-mt-tc-big-en-gmq
Helsinki-NLP
2023-10-10T10:34:07Z
3,092
3
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "tc", "big", "en", "gmq", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T14:14:55Z
--- language: - da - en - fo - gmq - is - nb - nn - false - sv tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-gmq results: - task: name: Translation eng-dan type: translation args: eng-dan dataset: name: flores101-devtest type: flores_101 args: eng dan devtest metrics: - name: BLEU type: bleu value: 47.7 - task: name: Translation eng-isl type: translation args: eng-isl dataset: name: flores101-devtest type: flores_101 args: eng isl devtest metrics: - name: BLEU type: bleu value: 24.1 - task: name: Translation eng-nob type: translation args: eng-nob dataset: name: flores101-devtest type: flores_101 args: eng nob devtest metrics: - name: BLEU type: bleu value: 34.5 - task: name: Translation eng-swe type: translation args: eng-swe dataset: name: flores101-devtest type: flores_101 args: eng swe devtest metrics: - name: BLEU type: bleu value: 46.9 - task: name: Translation eng-isl type: translation args: eng-isl dataset: name: newsdev2021.en-is type: newsdev2021.en-is args: eng-isl metrics: - name: BLEU type: bleu value: 22.6 - task: name: Translation eng-dan type: translation args: eng-dan dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-dan metrics: - name: BLEU type: bleu value: 61.6 - task: name: Translation eng-isl type: translation args: eng-isl dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-isl metrics: - name: BLEU type: bleu value: 39.9 - task: name: Translation eng-nno type: translation args: eng-nno dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-nno metrics: - name: BLEU type: bleu value: 40.1 - task: name: Translation eng-nob type: translation args: eng-nob dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-nob metrics: - name: BLEU type: bleu value: 57.3 - task: name: Translation eng-swe type: translation args: eng-swe dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-swe metrics: - name: BLEU type: bleu value: 60.9 - task: name: Translation eng-isl type: translation args: eng-isl dataset: name: newstest2021.en-is type: wmt-2021-news args: eng-isl metrics: - name: BLEU type: bleu value: 21.5 --- # opus-mt-tc-big-en-gmq Neural machine translation model for translating from English (en) to North Germanic languages (gmq). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-17 * source language(s): eng * target language(s): dan fao isl nno nob nor swe * valid target language labels: >>dan<< >>fao<< >>isl<< >>nno<< >>nob<< >>nor<< >>swe<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opusTCv20210807+bt_transformer-big_2022-03-17.zip) * more information released models: [OPUS-MT eng-gmq README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>dan<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>nno<< The United States borders Canada.", ">>nob<< This is the biggest hotel in this city." ] model_name = "pytorch-models/opus-mt-tc-big-en-gmq" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # USA grensar til Canada. # Dette er det største hotellet i denne byen. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-gmq") print(pipe(">>nno<< The United States borders Canada.")) # expected output: USA grensar til Canada. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-dan | tatoeba-test-v2021-08-07 | 0.75165 | 61.6 | 10795 | 79385 | | eng-fao | tatoeba-test-v2021-08-07 | 0.40395 | 18.3 | 294 | 1933 | | eng-isl | tatoeba-test-v2021-08-07 | 0.59731 | 39.9 | 2503 | 19023 | | eng-nno | tatoeba-test-v2021-08-07 | 0.61271 | 40.1 | 460 | 3428 | | eng-nob | tatoeba-test-v2021-08-07 | 0.72380 | 57.3 | 4539 | 36119 | | eng-swe | tatoeba-test-v2021-08-07 | 0.74197 | 60.9 | 10362 | 68067 | | eng-dan | flores101-devtest | 0.70810 | 47.7 | 1012 | 24638 | | eng-isl | flores101-devtest | 0.52076 | 24.1 | 1012 | 22834 | | eng-nob | flores101-devtest | 0.62760 | 34.5 | 1012 | 23873 | | eng-swe | flores101-devtest | 0.70129 | 46.9 | 1012 | 23121 | | eng-isl | newsdev2021.en-is | 0.50376 | 22.6 | 2004 | 43721 | | eng-isl | newstest2021.en-is | 0.50516 | 21.5 | 1000 | 25233 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:14:46 EEST 2022 * port machine: LM0-400-22516.local
EscvNcl/MobileNet-V2-Retinopathy
EscvNcl
2023-10-10T10:33:31Z
198
0
transformers
[ "transformers", "pytorch", "mobilenet_v2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/mobilenet_v2_1.4_224", "base_model:finetune:google/mobilenet_v2_1.4_224", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-10T09:46:40Z
--- license: other base_model: google/mobilenet_v2_1.4_224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: MobileNet-V2-Retinopathy results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9306930693069307 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MobileNet-V2-Retinopathy This model is a fine-tuned version of [google/mobilenet_v2_1.4_224](https://huggingface.co/google/mobilenet_v2_1.4_224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2044 - Accuracy: 0.9307 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4403 | 1.0 | 113 | 0.5330 | 0.7079 | | 0.5538 | 2.0 | 227 | 0.4312 | 0.7723 | | 0.542 | 3.0 | 340 | 0.5137 | 0.7426 | | 0.4776 | 4.0 | 454 | 0.4656 | 0.7723 | | 0.4244 | 5.0 | 567 | 1.0400 | 0.5990 | | 0.4694 | 6.0 | 681 | 0.5936 | 0.7228 | | 0.4494 | 7.0 | 794 | 0.4667 | 0.7822 | | 0.4647 | 8.0 | 908 | 0.2629 | 0.8960 | | 0.3646 | 9.0 | 1021 | 0.2287 | 0.8861 | | 0.4827 | 10.0 | 1135 | 1.7967 | 0.5149 | | 0.3679 | 11.0 | 1248 | 0.4184 | 0.8267 | | 0.3454 | 12.0 | 1362 | 0.1885 | 0.9406 | | 0.3562 | 13.0 | 1475 | 0.2798 | 0.9059 | | 0.3397 | 14.0 | 1589 | 1.6444 | 0.5891 | | 0.4047 | 14.93 | 1695 | 0.2044 | 0.9307 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
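The card above documents training but gives no inference snippet. A minimal usage sketch with the transformers image-classification pipeline (the class labels come from the training imagefolder and are not documented in the card, so inspect the returned labels rather than assuming specific class names):

```python
from transformers import pipeline

# Fine-tuned MobileNetV2 checkpoint from this repository.
classifier = pipeline("image-classification", model="EscvNcl/MobileNet-V2-Retinopathy")

# Replace with a path or URL to a retinal fundus image.
predictions = classifier("example_fundus_image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} entries
```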
Helsinki-NLP/opus-mt-tc-big-en-it
Helsinki-NLP
2023-10-10T10:33:03Z
297
5
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "it", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T14:27:31Z
--- language: - en - it tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-it results: - task: name: Translation eng-ita type: translation args: eng-ita dataset: name: flores101-devtest type: flores_101 args: eng ita devtest metrics: - name: BLEU type: bleu value: 29.6 - task: name: Translation eng-ita type: translation args: eng-ita dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-ita metrics: - name: BLEU type: bleu value: 53.9 - task: name: Translation eng-ita type: translation args: eng-ita dataset: name: newstest2009 type: wmt-2009-news args: eng-ita metrics: - name: BLEU type: bleu value: 31.6 --- # opus-mt-tc-big-en-it Neural machine translation model for translating from English (en) to Italian (it). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-13 * source language(s): eng * target language(s): ita * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ita/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-ita README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ita/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "He was always very respectful.", "This cat is black. Is the dog, too?" 
] model_name = "pytorch-models/opus-mt-tc-big-en-it" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Era sempre molto rispettoso. # Questo gatto e' nero, e' anche il cane? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-it") print(pipe("He was always very respectful.")) # expected output: Era sempre molto rispettoso. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ita/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ita/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-ita | tatoeba-test-v2021-08-07 | 0.72539 | 53.9 | 17320 | 116336 | | eng-ita | flores101-devtest | 0.59002 | 29.6 | 1012 | 27306 | | eng-ita | newssyscomb2009 | 0.60759 | 31.2 | 502 | 11551 | | eng-ita | newstest2009 | 0.60441 | 31.6 | 2525 | 63466 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:27:22 EEST 2022 * port machine: LM0-400-22516.local
YuZhong-Chen/q-FrozenLake-v1-4x4-noSlippery
YuZhong-Chen
2023-10-10T10:32:14Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-10T10:32:11Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="YuZhong-Chen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
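The `load_from_hub` helper in the snippet above comes from the course notebooks and is not a library function. A self-contained equivalent, assuming the pickled file is the dictionary the snippet indexes with `model["env_id"]`:

```python
import pickle

import gymnasium as gym  # use `import gym` instead if your stack still uses classic Gym
from huggingface_hub import hf_hub_download

# Download and unpickle the Q-table dictionary stored in this repo.
path = hf_hub_download(repo_id="YuZhong-Chen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The card notes that extra attributes such as is_slippery=False may be needed.
env = gym.make(model["env_id"], is_slippery=False)
```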
Helsinki-NLP/opus-mt-tc-big-sh-en
Helsinki-NLP
2023-10-10T10:32:07Z
42,437
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "tc", "big", "sh", "en", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T16:21:20Z
--- language: - bs_Latn - en - hr - sh - sr_Cyrl - sr_Latn tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-sh-en results: - task: name: Translation hrv-eng type: translation args: hrv-eng dataset: name: flores101-devtest type: flores_101 args: hrv eng devtest metrics: - name: BLEU type: bleu value: 37.1 - task: name: Translation bos_Latn-eng type: translation args: bos_Latn-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bos_Latn-eng metrics: - name: BLEU type: bleu value: 66.5 - task: name: Translation hbs-eng type: translation args: hbs-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-eng metrics: - name: BLEU type: bleu value: 56.4 - task: name: Translation hrv-eng type: translation args: hrv-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hrv-eng metrics: - name: BLEU type: bleu value: 58.8 - task: name: Translation srp_Cyrl-eng type: translation args: srp_Cyrl-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Cyrl-eng metrics: - name: BLEU type: bleu value: 44.7 - task: name: Translation srp_Latn-eng type: translation args: srp_Latn-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Latn-eng metrics: - name: BLEU type: bleu value: 58.4 --- # opus-mt-tc-big-sh-en Neural machine translation model for translating from Serbo-Croatian (sh) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-02-25 * source language(s): bos_Latn hrv srp_Cyrl srp_Latn * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip) * more information released models: [OPUS-MT hbs-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Ispostavilo se da je istina.", "Ovaj vikend imamo besplatne pozive." ] model_name = "pytorch-models/opus-mt-tc-big-sh-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Turns out it's true. # We got free calls this weekend. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-sh-en") print(pipe("Ispostavilo se da je istina.")) # expected output: Turns out it's true. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.80010 | 66.5 | 301 | 1826 | | hbs-eng | tatoeba-test-v2021-08-07 | 0.71744 | 56.4 | 10017 | 68934 | | hrv-eng | tatoeba-test-v2021-08-07 | 0.73563 | 58.8 | 1480 | 10620 | | srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.68248 | 44.7 | 1580 | 10181 | | srp_Latn-eng | tatoeba-test-v2021-08-07 | 0.71781 | 58.4 | 6656 | 46307 | | hrv-eng | flores101-devtest | 0.63948 | 37.1 | 1012 | 24721 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 19:21:10 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-zls-en
Helsinki-NLP
2023-10-10T10:31:05Z
5,752
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "bg", "bs", "en", "hr", "mk", "sh", "sl", "sr", "zls", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-13T17:12:36Z
--- language: - bg - bs - en - hr - mk - sh - sl - sr - zls tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-zls-en results: - task: name: Translation bul-eng type: translation args: bul-eng dataset: name: flores101-devtest type: flores_101 args: bul eng devtest metrics: - name: BLEU type: bleu value: 42.0 - task: name: Translation hrv-eng type: translation args: hrv-eng dataset: name: flores101-devtest type: flores_101 args: hrv eng devtest metrics: - name: BLEU type: bleu value: 37.1 - task: name: Translation mkd-eng type: translation args: mkd-eng dataset: name: flores101-devtest type: flores_101 args: mkd eng devtest metrics: - name: BLEU type: bleu value: 43.2 - task: name: Translation slv-eng type: translation args: slv-eng dataset: name: flores101-devtest type: flores_101 args: slv eng devtest metrics: - name: BLEU type: bleu value: 35.2 - task: name: Translation srp_Cyrl-eng type: translation args: srp_Cyrl-eng dataset: name: flores101-devtest type: flores_101 args: srp_Cyrl eng devtest metrics: - name: BLEU type: bleu value: 36.8 - task: name: Translation bos_Latn-eng type: translation args: bos_Latn-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bos_Latn-eng metrics: - name: BLEU type: bleu value: 66.5 - task: name: Translation bul-eng type: translation args: bul-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bul-eng metrics: - name: BLEU type: bleu value: 59.3 - task: name: Translation hbs-eng type: translation args: hbs-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hbs-eng metrics: - name: BLEU type: bleu value: 57.3 - task: name: Translation hrv-eng type: translation args: hrv-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: hrv-eng metrics: - name: BLEU type: bleu value: 59.2 - task: name: Translation mkd-eng type: translation args: mkd-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: mkd-eng metrics: - name: BLEU type: bleu value: 57.4 - task: name: Translation slv-eng type: translation args: slv-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: slv-eng metrics: - name: BLEU type: bleu value: 23.5 - task: name: Translation srp_Cyrl-eng type: translation args: srp_Cyrl-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Cyrl-eng metrics: - name: BLEU type: bleu value: 47.0 - task: name: Translation srp_Latn-eng type: translation args: srp_Latn-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: srp_Latn-eng metrics: - name: BLEU type: bleu value: 58.5 --- # opus-mt-tc-big-zls-en Neural machine translation model for translating from South Slavic languages (zls) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). 
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-17 * source language(s): bos_Latn bul hbs hrv mkd slv srp_Cyrl srp_Latn * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opusTCv20210807+bt_transformer-big_2022-03-17.zip) * more information released models: [OPUS-MT zls-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Да не би случайно Том да остави Мери да кара колата?", "Какво е времето днес?" ] model_name = "pytorch-models/opus-mt-tc-big-zls-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Did Tom just let Mary drive the car? # What's the weather like today? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zls-en") print(pipe("Да не би случайно Том да остави Мери да кара колата?")) # expected output: Did Tom just let Mary drive the car? 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.79339 | 66.5 | 301 | 1826 | | bul-eng | tatoeba-test-v2021-08-07 | 0.72656 | 59.3 | 10000 | 71872 | | hbs-eng | tatoeba-test-v2021-08-07 | 0.71783 | 57.3 | 10017 | 68934 | | hrv-eng | tatoeba-test-v2021-08-07 | 0.74066 | 59.2 | 1480 | 10620 | | mkd-eng | tatoeba-test-v2021-08-07 | 0.70043 | 57.4 | 10010 | 65667 | | slv-eng | tatoeba-test-v2021-08-07 | 0.39534 | 23.5 | 2495 | 16940 | | srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.67628 | 47.0 | 1580 | 10181 | | srp_Latn-eng | tatoeba-test-v2021-08-07 | 0.71878 | 58.5 | 6656 | 46307 | | bul-eng | flores101-devtest | 0.67375 | 42.0 | 1012 | 24721 | | hrv-eng | flores101-devtest | 0.63914 | 37.1 | 1012 | 24721 | | mkd-eng | flores101-devtest | 0.67444 | 43.2 | 1012 | 24721 | | slv-eng | flores101-devtest | 0.62087 | 35.2 | 1012 | 24721 | | srp_Cyrl-eng | flores101-devtest | 0.67810 | 36.8 | 1012 | 24721 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 20:12:26 EEST 2022 * port machine: LM0-400-22516.local
Helsinki-NLP/opus-mt-tc-big-en-ko
Helsinki-NLP
2023-10-10T10:29:58Z
1,276
14
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "ko", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-08-12T08:02:12Z
--- language: - en - ko tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-ko results: - task: name: Translation eng-kor type: translation args: eng-kor dataset: name: flores101-devtest type: flores_101 args: eng kor devtest metrics: - name: BLEU type: bleu value: 13.7 - name: chr-F type: chrf value: 0.36399 --- # opus-mt-tc-big-en-ko ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from English (en) to Korean (ko). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-28 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): - Target Language(s): - Valid Target Language Labels: - **Original Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT eng-kor README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-kor/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>><<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "2, 4, 6 etc. are even numbers.", "Yes." 
] model_name = "pytorch-models/opus-mt-tc-big-en-ko" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # 2, 4, 6 등은 짝수입니다. # 그래 ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ko") print(pipe("2, 4, 6 etc. are even numbers.")) # expected output: 2, 4, 6 등은 짝수입니다. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt) * test set scores: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Fri Aug 12 11:02:03 EEST 2022 * port machine: LM0-400-22516.local
Nga3110/nha97
Nga3110
2023-10-10T10:27:55Z
1
0
diffusers
[ "diffusers", "text-to-image", "region:us" ]
text-to-image
2023-10-10T09:59:36Z
--- library_name: diffusers pipeline_tag: text-to-image ---
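The card carries only metadata. Assuming the repository hosts a complete diffusers text-to-image pipeline (and not just an adapter or partial weights), a minimal loading sketch would look like this:

```python
import torch
from diffusers import DiffusionPipeline

# DiffusionPipeline resolves the concrete pipeline class from the repo's model_index.json.
pipe = DiffusionPipeline.from_pretrained("Nga3110/nha97", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of a cat").images[0]  # prompt is illustrative only
image.save("output.png")
```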
srushtibhavsar/sqaud-bloom-3b
srushtibhavsar
2023-10-10T10:25:44Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:bigscience/bloom-1b7", "base_model:adapter:bigscience/bloom-1b7", "region:us" ]
null
2023-10-10T10:25:43Z
--- library_name: peft base_model: bigscience/bloom-1b7 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
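Although the card is still the blank template, the metadata declares a PEFT adapter on top of `bigscience/bloom-1b7`. A hedged loading sketch, assuming a standard causal-LM adapter (e.g. LoRA) stored in this repo; the prompt below is purely illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "bigscience/bloom-1b7"  # declared as base_model in the card metadata
adapter_id = "srushtibhavsar/sqaud-bloom-3b"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Question: What is SQuAD?\nAnswer:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```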
cys/Reinforce-v1
cys
2023-10-10T10:23:24Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-10T10:23:14Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
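The card itself includes no loading snippet. The Unit 4 notebooks typically save the trained policy with `torch.save`; assuming that convention, a sketch (the filename below is an assumption — check the repo's file list, and note that unpickling the policy requires the `Policy` class definition from the training notebook to be importable):

```python
import torch
from huggingface_hub import hf_hub_download

# "model.pt" is an assumed filename following the course convention; verify it against the repo files.
path = hf_hub_download(repo_id="cys/Reinforce-v1", filename="model.pt")
policy = torch.load(path)  # requires the original Policy class to be defined/importable
policy.eval()
```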
jcinque/ppo-LunarLander-v2
jcinque
2023-10-10T10:21:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-10T10:21:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.91 +/- 22.15 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
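The usage section above is left as a TODO. A hedged completion using `huggingface_sb3` (the checkpoint filename is an assumption based on common naming from the course — check the repo's file list):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint and load it into a PPO model.
checkpoint = load_from_hub(repo_id="jcinque/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Run a single greedy step in the environment as a smoke test.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
print(action)
```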
mys/ggml_llava-v1.5-13b
mys
2023-10-10T10:20:06Z
1,078
53
null
[ "gguf", "llava", "lmm", "ggml", "llama.cpp", "endpoints_compatible", "region:us" ]
null
2023-10-10T10:04:00Z
--- tags: - llava - lmm - ggml - llama.cpp --- # ggml_llava-v1.5-13b This repo contains GGUF files for running inference on [llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end, without any extra dependency. **Note**: the `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
srjn/q-Taxi-v3
srjn
2023-10-10T10:16:01Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-10T10:15:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="srjn/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Dhineshk/TestDocumentQuestionAnswering
Dhineshk
2023-10-10T10:15:36Z
4
0
transformers
[ "transformers", "pytorch", "layoutlmv2", "document-question-answering", "generated_from_trainer", "base_model:microsoft/layoutlmv2-base-uncased", "base_model:finetune:microsoft/layoutlmv2-base-uncased", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
2023-09-27T07:48:00Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv2-base-uncased tags: - generated_from_trainer model-index: - name: layoutlmv2-base-uncased_finetuned_docvqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-base-uncased_finetuned_docvqa This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.3353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.153 | 0.22 | 50 | 5.3909 | | 0.2793 | 0.44 | 100 | 5.0150 | | 0.2634 | 0.66 | 150 | 4.6620 | | 0.5192 | 0.88 | 200 | 4.7826 | | 0.3096 | 1.11 | 250 | 4.9532 | | 0.2638 | 1.33 | 300 | 5.2584 | | 0.4727 | 1.55 | 350 | 4.0943 | | 0.2763 | 1.77 | 400 | 4.8408 | | 1.0425 | 1.99 | 450 | 5.0344 | | 0.4477 | 2.21 | 500 | 4.9084 | | 0.3266 | 2.43 | 550 | 5.0996 | | 0.3085 | 2.65 | 600 | 4.4858 | | 0.4648 | 2.88 | 650 | 4.0630 | | 0.1845 | 3.1 | 700 | 5.3969 | | 0.1616 | 3.32 | 750 | 4.8225 | | 0.1752 | 3.54 | 800 | 5.2945 | | 0.1877 | 3.76 | 850 | 5.2358 | | 0.3172 | 3.98 | 900 | 5.2205 | | 0.1627 | 4.2 | 950 | 4.9991 | | 0.2548 | 4.42 | 1000 | 4.6917 | | 0.1566 | 4.65 | 1050 | 5.1266 | | 0.2616 | 4.87 | 1100 | 4.3241 | | 0.1199 | 5.09 | 1150 | 4.9821 | | 0.1372 | 5.31 | 1200 | 5.0838 | | 0.1198 | 5.53 | 1250 | 5.0156 | | 0.0558 | 5.75 | 1300 | 4.8638 | | 0.1331 | 5.97 | 1350 | 4.9492 | | 0.0689 | 6.19 | 1400 | 4.6926 | | 0.0912 | 6.42 | 1450 | 4.5153 | | 0.0495 | 6.64 | 1500 | 4.6969 | | 0.0853 | 6.86 | 1550 | 4.7690 | | 0.1072 | 7.08 | 1600 | 4.6783 | | 0.034 | 7.3 | 1650 | 4.7351 | | 0.2999 | 7.52 | 1700 | 4.5185 | | 0.0763 | 7.74 | 1750 | 4.5825 | | 0.0799 | 7.96 | 1800 | 4.7218 | | 0.0343 | 8.19 | 1850 | 5.1508 | | 0.0396 | 8.41 | 1900 | 5.4893 | | 0.033 | 8.63 | 1950 | 5.5167 | | 0.0295 | 8.85 | 2000 | 5.6252 | | 0.2303 | 9.07 | 2050 | 4.7031 | | 0.088 | 9.29 | 2100 | 4.7323 | | 0.0666 | 9.51 | 2150 | 4.8688 | | 0.0597 | 9.73 | 2200 | 5.6007 | | 0.0615 | 9.96 | 2250 | 5.5403 | | 0.1003 | 10.18 | 2300 | 5.3198 | | 0.0457 | 10.4 | 2350 | 5.4828 | | 0.0391 | 10.62 | 2400 | 5.5312 | | 0.0325 | 10.84 | 2450 | 5.7410 | | 0.0147 | 11.06 | 2500 | 5.8749 | | 0.1013 | 11.28 | 2550 | 5.6522 | | 0.001 | 11.5 | 2600 | 5.7776 | | 0.0002 | 11.73 | 2650 | 5.8431 | | 0.03 | 11.95 | 2700 | 5.9751 | | 0.0452 | 12.17 | 2750 | 5.6928 | | 0.0002 | 12.39 | 2800 | 5.6264 | | 0.0109 | 12.61 | 2850 | 5.2688 | | 0.0801 | 12.83 | 2900 | 5.2780 | | 0.0216 | 13.05 | 2950 | 5.3691 | | 0.0002 | 13.27 | 3000 | 5.5237 | | 0.0092 | 13.5 | 3050 | 5.3662 | | 0.0124 | 13.72 | 3100 | 5.4474 | | 0.0515 | 13.94 | 3150 | 5.3623 | | 0.0032 | 14.16 | 3200 | 5.4168 | | 0.0051 | 14.38 | 3250 | 5.2897 | | 0.0002 | 14.6 | 3300 | 5.3205 | | 0.014 | 14.82 | 3350 | 5.2114 | | 0.0004 | 15.04 | 3400 | 5.2342 | | 0.0104 | 15.27 | 3450 | 5.2562 | | 0.0107 | 
15.49 | 3500 | 5.1112 | | 0.0002 | 15.71 | 3550 | 5.1515 | | 0.0002 | 15.93 | 3600 | 5.2054 | | 0.0002 | 16.15 | 3650 | 5.1968 | | 0.0003 | 16.37 | 3700 | 5.3196 | | 0.0246 | 16.59 | 3750 | 5.3111 | | 0.0054 | 16.81 | 3800 | 5.3335 | | 0.0001 | 17.04 | 3850 | 5.3488 | | 0.0243 | 17.26 | 3900 | 5.2597 | | 0.0217 | 17.48 | 3950 | 5.2834 | | 0.0002 | 17.7 | 4000 | 5.2947 | | 0.0002 | 17.92 | 4050 | 5.3131 | | 0.0001 | 18.14 | 4100 | 5.3240 | | 0.0016 | 18.36 | 4150 | 5.3129 | | 0.0133 | 18.58 | 4200 | 5.3241 | | 0.0002 | 18.81 | 4250 | 5.3382 | | 0.0159 | 19.03 | 4300 | 5.3764 | | 0.003 | 19.25 | 4350 | 5.3776 | | 0.0516 | 19.47 | 4400 | 5.3389 | | 0.016 | 19.69 | 4450 | 5.3275 | | 0.0105 | 19.91 | 4500 | 5.3353 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cpu - Datasets 2.14.5 - Tokenizers 0.13.3
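The card above lists only training details. As a hedged usage sketch, the fine-tuned checkpoint can in principle be queried through the `document-question-answering` pipeline; this assumes the repo ships the processor/tokenizer files, that detectron2 is available for the LayoutLMv2 visual backbone, and that Tesseract plus `pytesseract` are installed for OCR.

```python
from transformers import pipeline

# Requires: pip install pytesseract pillow (plus a system Tesseract install) and detectron2 for LayoutLMv2
qa = pipeline(
    "document-question-answering",
    model="Dhineshk/TestDocumentQuestionAnswering",  # repo id from this card; assumes all processor files are present
)

# The image path and question are placeholders for illustration
result = qa(image="invoice.png", question="What is the invoice number?")
print(result)
```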
amanpelago/pelago-sentence-transformer-v1
amanpelago
2023-10-10T10:13:38Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-10-10T04:28:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # amanpelago/pelago-sentence-transformer-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('amanpelago/pelago-sentence-transformer-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=amanpelago/pelago-sentence-transformer-v1) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3181 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
digiplay/BeenReal_diffusers
digiplay
2023-10-10T10:11:22Z
5,709
6
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-25T22:47:35Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://pixai.art/model/1621642635946443255 https://aitool.ai/model/76296
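Since the card only links out to external pages, here is a hedged loading sketch with diffusers; fp16 on a CUDA GPU is an assumption about the available hardware, and the prompt is just an example.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/BeenReal_diffusers",
    torch_dtype=torch.float16,  # assumes a CUDA GPU; drop for CPU inference
)
pipe = pipe.to("cuda")

image = pipe("a photorealistic portrait, soft natural light", num_inference_steps=30).images[0]
image.save("beenreal_sample.png")
```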
SparkExpedition/Ticket-Classifier-dolly-7B
SparkExpedition
2023-10-10T10:04:59Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:diegi97/dolly-v2-6.9b-sharded-bf16", "base_model:adapter:diegi97/dolly-v2-6.9b-sharded-bf16", "region:us" ]
null
2023-10-10T09:15:45Z
--- library_name: peft base_model: diegi97/dolly-v2-6.9b-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
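Given the bitsandbytes settings recorded above, a hedged sketch for loading this adapter on top of its base model might look like the following. The 4-bit NF4 quantization with bfloat16 compute mirrors the training config; the example prompt is an illustration, since the expected ticket-classification prompt format is not documented in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "diegi97/dolly-v2-6.9b-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the PEFT adapter from this repo on top of the quantized base model
model = PeftModel.from_pretrained(base, "SparkExpedition/Ticket-Classifier-dolly-7B")

inputs = tokenizer("Classify this ticket: my laptop will not boot.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```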
ali5341/videomae-base-finetuned-ucf101-subset
ali5341
2023-10-10T10:03:57Z
59
0
transformers
[ "transformers", "pytorch", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-10-05T17:07:37Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6506 - Accuracy: 0.5587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 29760 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.1471 | 0.2 | 5952 | 4.1601 | 0.0382 | | 4.0729 | 1.2 | 11904 | 3.6134 | 0.1013 | | 2.6787 | 2.2 | 17856 | 2.9397 | 0.2193 | | 1.722 | 3.2 | 23808 | 2.0974 | 0.4241 | | 0.6968 | 4.2 | 29760 | 1.7135 | 0.5362 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
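As a hedged inference sketch, the fine-tuned checkpoint should work with the `video-classification` pipeline; this assumes a video decoding backend such as `decord` or `av` is installed, and the clip path below is a placeholder.

```python
from transformers import pipeline

# Requires a video backend, e.g. pip install decord
clf = pipeline("video-classification", model="ali5341/videomae-base-finetuned-ucf101-subset")
predictions = clf("path/to/clip.mp4", top_k=3)  # placeholder path
print(predictions)
```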
Utkarshquytech/sd-german-shepherd
Utkarshquytech
2023-10-10T10:03:20Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-10T09:50:27Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### sd-german-shepherd Dreambooth model trained by Utkarshquytech with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Utkarshquytech/sd-german-shepherd/resolve/main/sample_images/German_Shepherd_(04).jpg) ![1](https://huggingface.co/Utkarshquytech/sd-german-shepherd/resolve/main/sample_images/German_Shepherd_(03).jpg) ![2](https://huggingface.co/Utkarshquytech/sd-german-shepherd/resolve/main/sample_images/German_Shepherd_(06).jpg) ![3](https://huggingface.co/Utkarshquytech/sd-german-shepherd/resolve/main/sample_images/German_Shepherd_(01).jpg) ![4](https://huggingface.co/Utkarshquytech/sd-german-shepherd/resolve/main/sample_images/German_Shepherd_(05).jpg) ![5](https://huggingface.co/Utkarshquytech/sd-german-shepherd/resolve/main/sample_images/German_Shepherd_(02).jpg)
digiplay/elegantEntropy_v1.1
digiplay
2023-10-10T09:57:04Z
249
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-22T01:44:34Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/78341/elegant-entropy Original Author's DEMO images: ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4365ef14-2fe0-4275-90c9-ae4fd8dd0813/width=512/00004-2644148705.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8e703631-c2af-43f3-8fd3-c319b5374301/width=512/00001-369409155.jpeg) Sample image I made: ![05b61b0c-c47b-4506-93da-b4adc29e50ff.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/9L4ugnijNwHcobyewOsvB.jpeg)
aghorbani/opus-mt-tc-big-ar-en
aghorbani
2023-10-10T09:56:54Z
112
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "ar", "en", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-11-14T08:29:32Z
--- language: - ar - en tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-ar-en results: - task: name: Translation ara-eng type: translation args: ara-eng dataset: name: flores101-devtest type: flores_101 args: ara eng devtest metrics: - name: BLEU type: bleu value: 42.6 - task: name: Translation ara-eng type: translation args: ara-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ara-eng metrics: - name: BLEU type: bleu value: 47.3 - task: name: Translation ara-eng type: translation args: ara-eng dataset: name: tico19-test type: tico19-test args: ara-eng metrics: - name: BLEU type: bleu value: 44.4 --- # opus-mt-tc-big-ar-en Neural machine translation model for translating from Arabic (ar) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-09 * source language(s): afb ara arz * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip) * more information released models: [OPUS-MT ara-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "اتبع قلبك فحسب.", "وين راهي دّوش؟" ] model_name = "pytorch-models/opus-mt-tc-big-ar-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, 
return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Just follow your heart. # Wayne Rahi Dosh? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-ar-en") print(pipe("اتبع قلبك فحسب.")) # expected output: Just follow your heart. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | ara-eng | tatoeba-test-v2021-08-07 | 0.63477 | 47.3 | 10305 | 76975 | | ara-eng | flores101-devtest | 0.66987 | 42.6 | 1012 | 24721 | | ara-eng | tico19-test | 0.68521 | 44.4 | 2100 | 56323 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 18:17:57 EEST 2022 * port machine: LM0-400-22516.local
Srish117/gpt2-wikitext2
Srish117
2023-10-10T09:53:51Z
226
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-10T09:06:40Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5587 | 1.0 | 2249 | 6.4672 | | 6.1907 | 2.0 | 4498 | 6.1993 | | 6.0153 | 3.0 | 6747 | 6.1117 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
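For completeness, a hedged generation sketch with this fine-tuned checkpoint; the prompt and sampling settings are arbitrary examples.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Srish117/gpt2-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```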
rbel/llama2-test-new
rbel
2023-10-10T09:43:01Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "question-answering", "en", "dataset:rbel/jobtitles", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2023-10-10T08:46:22Z
--- license: apache-2.0 datasets: - rbel/jobtitles language: - en library_name: transformers pipeline_tag: question-answering ---
m-aliabbas1/med_ner_2
m-aliabbas1
2023-10-10T09:34:55Z
106
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "base_model:prajjwal1/bert-tiny", "base_model:finetune:prajjwal1/bert-tiny", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-10T09:34:42Z
--- license: mit base_model: prajjwal1/bert-tiny tags: - generated_from_trainer model-index: - name: med_ner_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # med_ner_2 This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0295 - Overall Precision: 1.0 - Overall Recall: 0.9831 - Overall F1: 0.9915 - Overall Accuracy: 0.9977 - Age F1: 0.9888 - Yob F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 250 ### Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Age F1 | Yob F1 | |:-------------:|:------:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:| | 0.0 | 47.62 | 1000 | 0.0364 | 1.0 | 0.9831 | 0.9915 | 0.9977 | 0.9888 | 1.0 | | 0.0 | 95.24 | 2000 | 0.0363 | 1.0 | 0.9831 | 0.9915 | 0.9977 | 0.9888 | 1.0 | | 0.0 | 142.86 | 3000 | 0.0279 | 1.0 | 0.9831 | 0.9915 | 0.9977 | 0.9888 | 1.0 | | 0.0 | 190.48 | 4000 | 0.0265 | 1.0 | 0.9831 | 0.9915 | 0.9977 | 0.9888 | 1.0 | | 0.0 | 238.1 | 5000 | 0.0295 | 1.0 | 0.9831 | 0.9915 | 0.9977 | 0.9888 | 1.0 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
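Given the Age/Yob entity metrics above, a hedged NER usage sketch follows; the example sentence is made up and the exact label names stored in the repo's config are an assumption.

```python
from transformers import pipeline

ner = pipeline("token-classification", model="m-aliabbas1/med_ner_2", aggregation_strategy="simple")
print(ner("The patient is 42 years old and was born in 1981."))
# Expected to tag the age ("42") and year-of-birth ("1981") spans, per the Age/Yob metrics above
```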
shubhamgantayat/EleutherAI-gpt-neo-125m-wet-strength-model
shubhamgantayat
2023-10-10T09:34:20Z
117
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:finetune:EleutherAI/gpt-neo-125m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-10-10T09:25:52Z
--- license: mit base_model: EleutherAI/gpt-neo-125m tags: - generated_from_trainer model-index: - name: EleutherAI-gpt-neo-125m-wet-strength-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EleutherAI-gpt-neo-125m-wet-strength-model This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
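The hyperparameters listed above translate roughly into the following `TrainingArguments` — a reconstruction for illustration, not the exact script that produced this checkpoint; the output path is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="EleutherAI-gpt-neo-125m-wet-strength-model",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,   # effective batch size of 128, as reported above
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    seed=42,
)
```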