| Column | Type | Range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-08 06:28:05 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 546 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-08 06:27:40 |
| card | string | length 11 to 1.01M |
lauer/distilbert-base-uncased-finetuned-emotion
lauer
2022-08-08T15:47:05Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-08T15:14:30Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9305 - name: F1 type: f1 value: 0.930580220497549 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2133 - Accuracy: 0.9305 - F1: 0.9306 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.825 | 1.0 | 250 | 0.2957 | 0.909 | 0.9069 | | 0.2399 | 2.0 | 500 | 0.2133 | 0.9305 | 0.9306 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
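A minimal usage sketch for the emotion classifier described in the card above, assuming the checkpoint is public on the Hub and loads with the standard `transformers` text-classification pipeline; the input sentence is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the card above.
classifier = pipeline(
    "text-classification",
    model="lauer/distilbert-base-uncased-finetuned-emotion",
)

# Classify an illustrative sentence; the emotion dataset uses labels such as
# joy, sadness, love, anger, fear and surprise.
print(classifier("I can't wait to see you again!"))
```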
MLRS/mBERTu
MLRS
2022-08-08T15:44:11Z
34
3
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "mt", "dataset:MLRS/korpus_malti", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-14T13:54:27Z
--- language: - mt datasets: - MLRS/korpus_malti model-index: - name: mBERTu results: - task: type: dependency-parsing name: Dependency Parsing dataset: type: universal_dependencies args: mt_mudt name: Maltese Universal Dependencies Treebank (MUDT) metrics: - type: uas value: 92.10 name: Unlabelled Attachment Score - type: las value: 87.87 name: Labelled Attachment Score - task: type: part-of-speech-tagging name: Part-of-Speech Tagging dataset: type: mlrs_pos name: MLRS POS dataset metrics: - type: accuracy value: 98.66 name: UPOS Accuracy args: upos - type: accuracy value: 98.58 name: XPOS Accuracy args: xpos - task: type: named-entity-recognition name: Named Entity Recognition dataset: type: wikiann name: WikiAnn (Maltese) args: mt metrics: - type: f1 args: span value: 86.60 name: Span-based F1 - task: type: sentiment-analysis name: Sentiment Analysis dataset: type: mt-sentiment-analysis name: Maltese Sentiment Analysis Dataset metrics: - type: f1 args: macro value: 76.79 name: Macro-averaged F1 license: cc-by-nc-sa-4.0 widget: - text: "Malta huwa pajjiż fl-[MASK]." --- # mBERTu A Maltese multilingual model pre-trained on the Korpus Malti v4.0 using multilingual BERT as the initial checkpoint. ## License This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/). [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png ## Citation This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://aclanthology.org/2022.deeplo-1.10/). Cite it as follows: ```bibtex @inproceedings{BERTu, title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese", author = "Micallef, Kurt and Gatt, Albert and Tanti, Marc and van der Plas, Lonneke and Borg, Claudia", booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing", month = jul, year = "2022", address = "Hybrid", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.deeplo-1.10", doi = "10.18653/v1/2022.deeplo-1.10", pages = "90--101", } ```
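The card above documents mBERTu's benchmark scores but not its loading code; a hedged fill-mask sketch using the widget sentence from the card, assuming the standard `transformers` pipeline applies to this BERT checkpoint:

```python
from transformers import pipeline

# Masked-token prediction with mBERTu; the prompt is the card's widget example
# ("Malta is a country in the [MASK].").
fill_mask = pipeline("fill-mask", model="MLRS/mBERTu")

for prediction in fill_mask("Malta huwa pajjiż fl-[MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```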
wenkai-li/finetuned-marktextepoch-n500
wenkai-li
2022-08-08T15:03:23Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-08-08T04:03:26Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: finetuned-marktextepoch-n500 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-marktextepoch-n500 This model is a fine-tuned version of [leokai/finetuned-marktextepoch-n200](https://huggingface.co/leokai/finetuned-marktextepoch-n200) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 2.4281 - eval_runtime: 11.4175 - eval_samples_per_second: 279.571 - eval_steps_per_second: 34.946 - epoch: 218.0 - step: 350108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 300 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
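As a sketch of how the hyperparameters listed in the card above map onto the `transformers` Trainer API; the output directory is a placeholder, and the Adam betas and epsilon in the card are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# TrainingArguments mirroring the card: lr 2e-05, batch size 8, seed 42,
# linear LR schedule, 300 epochs. The output directory is a placeholder.
training_args = TrainingArguments(
    output_dir="finetuned-marktextepoch-n500",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=300,
)
```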
neskue/ppo-LunarLander-v2
neskue
2022-08-08T14:57:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-08T14:56:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 27.28 +/- 144.51 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
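The usage section of the card above is left as a TODO with import stubs only; a hedged completion following the usual `huggingface_sb3` workflow might look like the sketch below. The saved-agent filename inside the repo and the classic `gym` API are assumptions.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the .zip filename is an assumption about
# how the agent was uploaded and may differ.
checkpoint = load_from_hub(repo_id="neskue/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```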
sofiaoliveira/q-FrozenLake-v1-4x4-noSlippery
sofiaoliveira
2022-08-08T13:54:54Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-08T13:49:17Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="sofiaoliveira/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
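The snippet in the card above relies on `load_from_hub` and `evaluate_agent` helpers that are not shown; a self-contained sketch of the same loading step, assuming the pickled dictionary uses the keys referenced in the card and the classic four-value `gym` step API:

```python
import pickle

import gym
from huggingface_hub import hf_hub_download

# Download and unpickle the saved agent (a dict with "env_id", "qtable", etc.).
path = hf_hub_download(
    repo_id="sofiaoliveira/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    model = pickle.load(f)

# Recreate the environment; the card notes is_slippery=False must be set manually.
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout of one episode using the stored Q-table.
state, done, total_reward = env.reset(), False, 0.0
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, done, _ = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```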
dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm500_aug1
dminiotas05
2022-08-08T13:27:41Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-08T11:37:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-ft1500_norm500_aug1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft1500_norm500_aug1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9086 - Mse: 3.6357 - Mae: 1.0762 - R2: 0.2894 - Accuracy: 0.5170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:--------:| | 1.5856 | 1.0 | 5847 | 3.3101 | 4.1376 | 1.1447 | 0.1913 | 0.4965 | | 0.442 | 2.0 | 11694 | 2.7448 | 3.4311 | 1.0934 | 0.3294 | 0.4523 | | 0.2703 | 3.0 | 17541 | 2.9300 | 3.6625 | 1.0907 | 0.2841 | 0.4933 | | 0.1699 | 4.0 | 23388 | 2.7979 | 3.4973 | 1.0808 | 0.3164 | 0.4805 | | 0.1168 | 5.0 | 29235 | 2.9086 | 3.6357 | 1.0762 | 0.2894 | 0.5170 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Lvxue/distilled-mt5-small-0.5
Lvxue
2022-08-08T10:41:51Z
8
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-08T08:12:07Z
--- language: - en - ro license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: distilled-mt5-small-0.5 results: - task: name: Translation type: translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 1.2575 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-0.5 This model is a distilled version of [Lvxue/finetuned-mt5-base](https://huggingface.co/Lvxue/finetuned-mt5-base) on [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.7455 - Bleu: 1.2575 - Gen Len: 94.3597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
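A hedged generation sketch for the distilled translation model in the card above; feeding the raw Romanian sentence with no task prefix is an assumption, since the card does not document the preprocessing used during distillation.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the distilled mT5 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Lvxue/distilled-mt5-small-0.5")
model = AutoModelForSeq2SeqLM.from_pretrained("Lvxue/distilled-mt5-small-0.5")

# Romanian -> English example sentence (illustrative).
inputs = tokenizer("Aceasta este o propoziție de test.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```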
osanseviero/distilroberta-base-sentence-transformer
osanseviero
2022-08-08T09:33:52Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "dataset:embedding-data/QQP_triplets", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-08-08T09:33:42Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - embedding-data/QQP_triplets --- # osanseviero/distilroberta-base-sentence-transformer This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('osanseviero/distilroberta-base-sentence-transformer') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('osanseviero/distilroberta-base-sentence-transformer') model = AutoModel.from_pretrained('osanseviero/distilroberta-base-sentence-transformer') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=osanseviero/distilroberta-base-sentence-transformer) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 63 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 63, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Lvxue/distilled-mt5-small-0.9
Lvxue
2022-08-08T09:27:38Z
6
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-08T08:18:23Z
--- language: - en - ro license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: distilled-mt5-small-0.9 results: - task: name: Translation type: translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 2.6938 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-0.9 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.4137 - Bleu: 2.6938 - Gen Len: 69.7484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Mahmoud7/q-FrozenLake-v1-4x4-noSlippery
Mahmoud7
2022-08-08T09:14:33Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-08T09:14:25Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Mahmoud7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
202015004/Spoof_detection
202015004
2022-08-08T07:48:41Z
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-05T09:32:36Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Spoof_detection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Spoof_detection This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7448 - Wer: 0.1090 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 95.9046 | 0.66 | 500 | 992.2993 | 0.6180 | | 14.0322 | 1.33 | 1000 | 1.8873 | 0.1090 | | 1.8659 | 1.99 | 1500 | 1.7827 | 0.1090 | | 1.851 | 2.65 | 2000 | 1.8489 | 0.1090 | | 1.8218 | 3.32 | 2500 | 1.8943 | 0.1090 | | 1.8108 | 3.98 | 3000 | 1.9250 | 0.1090 | | 1.8228 | 4.64 | 3500 | 1.7555 | 0.1090 | | 1.832 | 5.31 | 4000 | 1.7837 | 0.1090 | | 1.8403 | 5.97 | 4500 | 1.6644 | 0.1090 | | 1.8292 | 6.63 | 5000 | 1.6906 | 0.1090 | | 1.8223 | 7.29 | 5500 | 1.6966 | 0.1090 | | 1.8007 | 7.96 | 6000 | 1.6951 | 0.1090 | | 1.7986 | 8.62 | 6500 | 1.7436 | 0.1090 | | 1.7933 | 9.28 | 7000 | 1.8169 | 0.1090 | | 1.7861 | 9.95 | 7500 | 1.7209 | 0.1090 | | 1.7843 | 10.61 | 8000 | 1.9379 | 0.1090 | | 1.7743 | 11.27 | 8500 | 1.9834 | 0.1090 | | 1.7721 | 11.94 | 9000 | 1.9279 | 0.1090 | | 1.7719 | 12.6 | 9500 | 1.8187 | 0.1090 | | 1.7616 | 13.26 | 10000 | 1.7804 | 0.1090 | | 1.7638 | 13.93 | 10500 | 1.7884 | 0.1090 | | 1.7651 | 14.59 | 11000 | 1.7476 | 0.1090 | | 1.7603 | 15.25 | 11500 | 1.7570 | 0.1090 | | 1.7543 | 15.92 | 12000 | 1.7356 | 0.1090 | | 1.7556 | 16.58 | 12500 | 1.7140 | 0.1090 | | 1.751 | 17.24 | 13000 | 1.7453 | 0.1090 | | 1.75 | 17.9 | 13500 | 1.7648 | 0.1090 | | 1.7492 | 18.57 | 14000 | 1.7338 | 0.1090 | | 1.7484 | 19.23 | 14500 | 1.7093 | 0.1090 | | 1.7461 | 19.89 | 15000 | 1.7393 | 0.1090 | | 1.7429 | 20.56 | 15500 | 1.7605 | 0.1090 | | 1.7446 | 21.22 | 16000 | 1.7782 | 0.1090 | | 1.7435 | 21.88 | 16500 | 1.6749 | 0.1090 | | 1.7392 | 22.55 | 17000 | 1.7468 | 0.1090 | | 1.741 | 23.21 | 17500 | 1.7406 | 0.1090 | | 1.7394 | 23.87 | 18000 | 1.7787 | 0.1090 | | 1.739 | 24.54 | 18500 | 1.7969 | 0.1090 | | 1.7341 | 25.2 | 19000 | 1.7490 | 0.1090 | | 1.7371 | 25.86 | 19500 | 1.7783 | 0.1090 | | 1.735 | 26.53 | 20000 | 1.7540 | 0.1090 | | 1.7353 | 27.19 | 20500 | 1.7735 | 0.1090 | | 1.7331 | 27.85 | 21000 | 1.7188 | 0.1090 | | 1.7308 | 28.51 | 21500 | 1.7349 | 0.1090 | | 1.7341 | 29.18 | 22000 | 1.7531 | 0.1090 | | 1.7305 | 29.84 | 22500 | 1.7448 | 0.1090 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.12.1
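A hedged transcription sketch for the wav2vec2 checkpoint in the card above, assuming the repo includes the processor files the pipeline needs; `sample.wav` is a placeholder path.

```python
from transformers import pipeline

# CTC-based speech recognition with the fine-tuned checkpoint; the pipeline
# resamples the input audio to 16 kHz as needed.
asr = pipeline("automatic-speech-recognition", model="202015004/Spoof_detection")
print(asr("sample.wav"))
```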
naveenkb/ppo-LunarLander-v2
naveenkb
2022-08-08T07:35:10Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-08T07:34:35Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 193.11 +/- 17.14 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
osanseviero/osans
osanseviero
2022-08-08T07:05:08Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2022-08-08T07:04:37Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision | |----|-------------|-----|------|------|-------|-------|------------------| |Adam|0.001|0.0|0.9|0.999|1e-07|False|float32| ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
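A sketch of rebuilding the optimizer configuration listed in the card above in `tf.keras`, and of loading the model itself; using `from_pretrained_keras` assumes the repo contains a model saved with the Hub's Keras mixin.

```python
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

# Load the Keras model from the Hub (assumes a SavedModel pushed via the Keras mixin).
model = from_pretrained_keras("osanseviero/osans")

# Optimizer matching the card's table: Adam, lr 0.001, beta_1 0.9, beta_2 0.999,
# epsilon 1e-07, amsgrad False.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False
)
model.compile(optimizer=optimizer)
```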
wenkai-li/finetuned-marktextepoch-n200
wenkai-li
2022-08-08T03:37:00Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-08-07T17:29:25Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: finetuned-marktextepoch-n200 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-marktextepoch-n200 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0880 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.5742 | 1.0 | 1606 | 2.4071 | | 2.4441 | 2.0 | 3212 | 2.2715 | | 2.3699 | 3.0 | 4818 | 2.2896 | | 2.3074 | 4.0 | 6424 | 2.2295 | | 2.2667 | 5.0 | 8030 | 2.2147 | | 2.2376 | 6.0 | 9636 | 2.1886 | | 2.2161 | 7.0 | 11242 | 2.1816 | | 2.1611 | 8.0 | 12848 | 2.1690 | | 2.1418 | 9.0 | 14454 | 2.1541 | | 2.1198 | 10.0 | 16060 | 2.1355 | | 2.1033 | 11.0 | 17666 | 2.1132 | | 2.0738 | 12.0 | 19272 | 2.1441 | | 2.0581 | 13.0 | 20878 | 2.1068 | | 2.0555 | 14.0 | 22484 | 2.1035 | | 2.0375 | 15.0 | 24090 | 2.1000 | | 2.0071 | 16.0 | 25696 | 2.1084 | | 1.9942 | 17.0 | 27302 | 2.0711 | | 1.9554 | 18.0 | 28908 | 2.0978 | | 1.9469 | 19.0 | 30514 | 2.0705 | | 1.9414 | 20.0 | 32120 | 2.0597 | | 1.9331 | 21.0 | 33726 | 2.0782 | | 1.9132 | 22.0 | 35332 | 2.0622 | | 1.9003 | 23.0 | 36938 | 2.0426 | | 1.9019 | 24.0 | 38544 | 2.0562 | | 1.8733 | 25.0 | 40150 | 2.0419 | | 1.8556 | 26.0 | 41756 | 2.0572 | | 1.8399 | 27.0 | 43362 | 2.0453 | | 1.8332 | 28.0 | 44968 | 2.0517 | | 1.8296 | 29.0 | 46574 | 2.0580 | | 1.788 | 30.0 | 48180 | 2.0454 | | 1.7944 | 31.0 | 49786 | 2.0193 | | 1.7716 | 32.0 | 51392 | 2.0595 | | 1.7799 | 33.0 | 52998 | 2.0379 | | 1.7633 | 34.0 | 54604 | 2.0392 | | 1.7477 | 35.0 | 56210 | 2.0122 | | 1.7407 | 36.0 | 57816 | 2.0293 | | 1.7163 | 37.0 | 59422 | 2.0339 | | 1.72 | 38.0 | 61028 | 1.9987 | | 1.729 | 39.0 | 62634 | 2.0135 | | 1.709 | 40.0 | 64240 | 2.0455 | | 1.7019 | 41.0 | 65846 | 2.0206 | | 1.6958 | 42.0 | 67452 | 2.0408 | | 1.6789 | 43.0 | 69058 | 2.0470 | | 1.6907 | 44.0 | 70664 | 2.0280 | | 1.6531 | 45.0 | 72270 | 2.0514 | | 1.6563 | 46.0 | 73876 | 2.0428 | | 1.6364 | 47.0 | 75482 | 2.0305 | | 1.6534 | 48.0 | 77088 | 2.0200 | | 1.6312 | 49.0 | 78694 | 2.0444 | | 1.6092 | 50.0 | 80300 | 2.0154 | | 1.5998 | 51.0 | 81906 | 2.0249 | | 1.5808 | 52.0 | 83512 | 2.0235 | | 1.5945 | 53.0 | 85118 | 2.0286 | | 1.6004 | 54.0 | 86724 | 2.0288 | | 1.5802 | 55.0 | 88330 | 2.0346 | | 1.5665 | 56.0 | 89936 | 2.0120 | | 1.5723 | 57.0 | 91542 | 2.0257 | | 1.5553 | 58.0 | 93148 | 2.0146 | | 1.5445 | 59.0 | 94754 | 2.0333 | | 1.5669 | 60.0 | 96360 | 2.0325 | | 1.5318 | 61.0 | 97966 | 2.0250 | | 1.5117 | 62.0 | 99572 | 2.0343 | | 1.5248 | 63.0 | 101178 | 2.0183 | | 1.5149 | 64.0 | 102784 | 2.0422 | | 1.5087 | 65.0 | 104390 | 2.0236 | | 1.5087 | 66.0 | 105996 | 2.0275 | | 1.4938 | 67.0 | 107602 | 2.0384 | | 1.5008 | 68.0 | 109208 | 2.0167 | | 1.4871 | 69.0 | 110814 | 2.0456 | | 1.4931 | 70.0 | 112420 | 2.0083 | | 1.467 | 
71.0 | 114026 | 2.0313 | | 1.4519 | 72.0 | 115632 | 2.0254 | | 1.448 | 73.0 | 117238 | 2.0289 | | 1.4475 | 74.0 | 118844 | 2.0051 | | 1.4522 | 75.0 | 120450 | 2.0378 | | 1.4508 | 76.0 | 122056 | 2.0612 | | 1.4428 | 77.0 | 123662 | 2.0479 | | 1.4496 | 78.0 | 125268 | 2.0082 | | 1.4305 | 79.0 | 126874 | 2.0376 | | 1.4072 | 80.0 | 128480 | 2.0294 | | 1.4148 | 81.0 | 130086 | 2.0565 | | 1.4078 | 82.0 | 131692 | 2.0309 | | 1.3931 | 83.0 | 133298 | 2.0371 | | 1.4038 | 84.0 | 134904 | 2.0318 | | 1.3893 | 85.0 | 136510 | 2.0413 | | 1.3862 | 86.0 | 138116 | 2.0503 | | 1.3782 | 87.0 | 139722 | 2.0182 | | 1.3757 | 88.0 | 141328 | 2.0253 | | 1.3879 | 89.0 | 142934 | 2.0357 | | 1.3768 | 90.0 | 144540 | 2.0405 | | 1.3494 | 91.0 | 146146 | 2.0495 | | 1.3492 | 92.0 | 147752 | 2.0586 | | 1.353 | 93.0 | 149358 | 2.0779 | | 1.3397 | 94.0 | 150964 | 2.0564 | | 1.3486 | 95.0 | 152570 | 2.0459 | | 1.3262 | 96.0 | 154176 | 2.0692 | | 1.349 | 97.0 | 155782 | 2.0765 | | 1.3228 | 98.0 | 157388 | 2.0443 | | 1.3291 | 99.0 | 158994 | 2.0617 | | 1.3211 | 100.0 | 160600 | 2.0552 | | 1.3344 | 101.0 | 162206 | 2.0626 | | 1.307 | 102.0 | 163812 | 2.0492 | | 1.2968 | 103.0 | 165418 | 2.0461 | | 1.2919 | 104.0 | 167024 | 2.0725 | | 1.3004 | 105.0 | 168630 | 2.0424 | | 1.303 | 106.0 | 170236 | 2.0484 | | 1.2847 | 107.0 | 171842 | 2.0083 | | 1.2861 | 108.0 | 173448 | 2.0491 | | 1.2763 | 109.0 | 175054 | 2.0505 | | 1.2852 | 110.0 | 176660 | 2.0691 | | 1.2611 | 111.0 | 178266 | 2.0711 | | 1.2739 | 112.0 | 179872 | 2.0730 | | 1.278 | 113.0 | 181478 | 2.0551 | | 1.2581 | 114.0 | 183084 | 2.0554 | | 1.2532 | 115.0 | 184690 | 2.0513 | | 1.2322 | 116.0 | 186296 | 2.0292 | | 1.2774 | 117.0 | 187902 | 2.0409 | | 1.242 | 118.0 | 189508 | 2.0517 | | 1.2476 | 119.0 | 191114 | 2.0612 | | 1.2314 | 120.0 | 192720 | 2.0795 | | 1.2379 | 121.0 | 194326 | 2.0679 | | 1.2291 | 122.0 | 195932 | 2.0472 | | 1.2515 | 123.0 | 197538 | 2.0829 | | 1.2467 | 124.0 | 199144 | 2.0662 | | 1.2437 | 125.0 | 200750 | 2.0962 | | 1.2373 | 126.0 | 202356 | 2.0692 | | 1.2099 | 127.0 | 203962 | 2.0688 | | 1.1911 | 128.0 | 205568 | 2.0803 | | 1.2311 | 129.0 | 207174 | 2.0765 | | 1.2095 | 130.0 | 208780 | 2.0697 | | 1.2093 | 131.0 | 210386 | 2.0507 | | 1.2065 | 132.0 | 211992 | 2.0658 | | 1.1964 | 133.0 | 213598 | 2.0542 | | 1.2085 | 134.0 | 215204 | 2.0722 | | 1.1871 | 135.0 | 216810 | 2.0806 | | 1.1863 | 136.0 | 218416 | 2.0691 | | 1.1763 | 137.0 | 220022 | 2.0869 | | 1.1816 | 138.0 | 221628 | 2.0780 | | 1.1854 | 139.0 | 223234 | 2.0462 | | 1.1902 | 140.0 | 224840 | 2.0880 | | 1.1762 | 141.0 | 226446 | 2.0682 | | 1.1551 | 142.0 | 228052 | 2.0837 | | 1.171 | 143.0 | 229658 | 2.1028 | | 1.1571 | 144.0 | 231264 | 2.0726 | | 1.1627 | 145.0 | 232870 | 2.0863 | | 1.1537 | 146.0 | 234476 | 2.0857 | | 1.1695 | 147.0 | 236082 | 2.0620 | | 1.1477 | 148.0 | 237688 | 2.0817 | | 1.1592 | 149.0 | 239294 | 2.0705 | | 1.1478 | 150.0 | 240900 | 2.0841 | | 1.1398 | 151.0 | 242506 | 2.0886 | | 1.144 | 152.0 | 244112 | 2.0673 | | 1.1646 | 153.0 | 245718 | 2.0620 | | 1.12 | 154.0 | 247324 | 2.0821 | | 1.1419 | 155.0 | 248930 | 2.0632 | | 1.1436 | 156.0 | 250536 | 2.0817 | | 1.1365 | 157.0 | 252142 | 2.0663 | | 1.1318 | 158.0 | 253748 | 2.0796 | | 1.1219 | 159.0 | 255354 | 2.0825 | | 1.1306 | 160.0 | 256960 | 2.0837 | | 1.1295 | 161.0 | 258566 | 2.0564 | | 1.1261 | 162.0 | 260172 | 2.0722 | | 1.1273 | 163.0 | 261778 | 2.1058 | | 1.1143 | 164.0 | 263384 | 2.0963 | | 1.1276 | 165.0 | 264990 | 2.0948 | | 1.1238 | 166.0 | 266596 | 2.0695 | | 1.1222 | 167.0 | 268202 | 2.0801 | | 1.1145 | 168.0 
| 269808 | 2.0768 | | 1.1093 | 169.0 | 271414 | 2.0664 | | 1.1141 | 170.0 | 273020 | 2.0903 | | 1.0936 | 171.0 | 274626 | 2.1012 | | 1.1048 | 172.0 | 276232 | 2.1033 | | 1.0991 | 173.0 | 277838 | 2.0761 | | 1.1164 | 174.0 | 279444 | 2.0689 | | 1.0935 | 175.0 | 281050 | 2.0754 | | 1.1032 | 176.0 | 282656 | 2.0810 | | 1.1124 | 177.0 | 284262 | 2.0790 | | 1.1107 | 178.0 | 285868 | 2.0762 | | 1.085 | 179.0 | 287474 | 2.0697 | | 1.093 | 180.0 | 289080 | 2.0856 | | 1.1034 | 181.0 | 290686 | 2.0734 | | 1.0983 | 182.0 | 292292 | 2.0837 | | 1.0972 | 183.0 | 293898 | 2.1063 | | 1.0909 | 184.0 | 295504 | 2.0873 | | 1.0805 | 185.0 | 297110 | 2.0888 | | 1.0893 | 186.0 | 298716 | 2.0498 | | 1.096 | 187.0 | 300322 | 2.0906 | | 1.0781 | 188.0 | 301928 | 2.0905 | | 1.0981 | 189.0 | 303534 | 2.0767 | | 1.093 | 190.0 | 305140 | 2.0695 | | 1.0814 | 191.0 | 306746 | 2.0763 | | 1.0862 | 192.0 | 308352 | 2.0890 | | 1.0833 | 193.0 | 309958 | 2.1026 | | 1.0806 | 194.0 | 311564 | 2.0978 | | 1.0834 | 195.0 | 313170 | 2.1004 | | 1.0807 | 196.0 | 314776 | 2.0953 | | 1.0827 | 197.0 | 316382 | 2.1129 | | 1.0826 | 198.0 | 317988 | 2.1069 | | 1.0796 | 199.0 | 319594 | 2.0867 | | 1.0881 | 200.0 | 321200 | 2.0880 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
jjjjjjjjjj/Reinforce-PixelCopterV1
jjjjjjjjjj
2022-08-08T03:32:35Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-08-08T03:22:44Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopterV1 results: - metrics: - type: mean_reward value: 18.50 +/- 12.47 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
abdulmatinomotoso/multi_news_article_title_25000_1
abdulmatinomotoso
2022-08-08T02:32:22Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-07T23:09:49Z
--- tags: - generated_from_trainer model-index: - name: multi_news_article_title_25000_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multi_news_article_title_25000_1 This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.3431 | 0.32 | 500 | 0.2731 | | 0.2171 | 0.64 | 1000 | 0.2056 | | 0.2211 | 0.96 | 1500 | 0.1973 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
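The card above describes a Pegasus checkpoint fine-tuned to turn news articles into titles; a hedged way to run it is through the summarization pipeline, with generation limits tuned for short outputs. The article text is a placeholder.

```python
from transformers import pipeline

# Title generation framed as summarization; the article string is a placeholder.
title_generator = pipeline(
    "summarization", model="abdulmatinomotoso/multi_news_article_title_25000_1"
)

article = "Scientists announced on Monday that ..."  # full article text goes here
print(title_generator(article, max_length=32, min_length=4)[0]["summary_text"])
```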
ariesutiono/scibert-lm-v2-finetuned-20
ariesutiono
2022-08-08T02:13:38Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:conll2003", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-08-08T01:28:54Z
--- tags: - generated_from_trainer datasets: - conll2003 model-index: - name: scibert-lm-v2-finetuned-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scibert-lm-v2-finetuned-20 This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 15.7952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0211 | 1.0 | 878 | 15.1971 | | 0.0001 | 2.0 | 1756 | 16.8774 | | 0.0001 | 3.0 | 2634 | 15.7952 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
jjjjjjjjjj/testpyramidsrnd
jjjjjjjjjj
2022-08-08T02:10:40Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-08-08T02:07:17Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: jjjjjjjjjj/testpyramidsrnd 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ultra-coder54732/comment-detection-prop-16
ultra-coder54732
2022-08-08T00:29:14Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-07T05:37:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: comment-detection-prop-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # comment-detection-prop-16 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
srcocotero/bert-qa-es
srcocotero
2022-08-07T21:19:06Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad_es", "endpoints_compatible", "region:us" ]
question-answering
2022-08-07T18:16:10Z
--- tags: - generated_from_trainer datasets: - squad_es model-index: - name: bert-qa-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-qa-es This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the squad_es dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
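A hedged Spanish question-answering sketch for the checkpoint in the card above; the question and context are illustrative.

```python
from transformers import pipeline

# Extractive QA with the Spanish BERT fine-tuned on squad_es.
qa = pipeline("question-answering", model="srcocotero/bert-qa-es")

result = qa(
    question="¿Dónde vive María?",
    context="María es ingeniera y vive en Madrid desde 2015.",
)
print(result["answer"], result["score"])
```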
Izarel/bert-base-uncased_title_fine_tuned
Izarel
2022-08-07T20:05:32Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-07T16:18:39Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - precision - f1 model-index: - name: bert-base-uncased_title_fine_tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_title_fine_tuned This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3368 - Accuracy: {'accuracy': 0.8810840405146455} - Recall: {'recall': 0.8611674554879423} - Precision: {'precision': 0.890468422279189} - F1: {'f1': 0.8755728689275893} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------------------------------:|:---------------------------------:|:--------------------------:| | 0.3224 | 1.0 | 3045 | 0.3079 | {'accuracy': 0.8730358609362168} | {'recall': 0.8139508677034032} | {'precision': 0.915346597389431} | {'f1': 0.861676110945422} | | 0.2818 | 2.0 | 6090 | 0.3153 | {'accuracy': 0.8814672871612373} | {'recall': 0.8299526707234618} | {'precision': 0.9182146864480738} | {'f1': 0.8718555785735426} | | 0.2394 | 3.0 | 9135 | 0.3104 | {'accuracy': 0.8830002737476047} | {'recall': 0.8548568852828488} | {'precision': 0.8993479549496147} | {'f1': 0.8765382171124848} | | 0.204 | 4.0 | 12180 | 0.3368 | {'accuracy': 0.8810840405146455} | {'recall': 0.8611674554879423} | {'precision': 0.890468422279189} | {'f1': 0.8755728689275893} | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
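The evaluation numbers in the card above are logged as nested dictionaries (for example `{'accuracy': 0.881...}`), which is what a `compute_metrics` function returning whole metric dicts produces; a hedged sketch of such a function using the `evaluate` library:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
recall = evaluate.load("recall")
precision = evaluate.load("precision")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Each metric returns its own dict, which the Trainer then logs verbatim,
    # producing the nested-dict values seen in the card's results table.
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels),
        "recall": recall.compute(predictions=preds, references=labels),
        "precision": precision.compute(predictions=preds, references=labels),
        "f1": f1.compute(predictions=preds, references=labels),
    }
```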
theicfire/ppo-LunarLander-v2
theicfire
2022-08-07T19:48:16Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-07T19:47:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 245.21 +/- 40.58 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ycchen/TrOCR-base-ver021-v1
ycchen
2022-08-07T19:23:45Z
45
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-08-07T18:49:55Z
### How to use Here is how to use this model in PyTorch: ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel from PIL import Image import requests # load image from the IAM database (actually this model is meant to be used on printed text) url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg' image = Image.open(requests.get(url, stream=True).raw).convert("RGB") processor = TrOCRProcessor.from_pretrained('ycchen/TrOCR-base-ver021-v1') model = VisionEncoderDecoderModel.from_pretrained('ycchen/TrOCR-base-ver021-v1') pixel_values = processor(images=image, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] ```
cataluna84/xlm-roberta-base-finetuned-panx-fr
cataluna84
2022-08-07T17:37:32Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-07T17:17:34Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8346456692913387 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2763 - F1: 0.8346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 | | 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 | | 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
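A hedged named-entity-recognition sketch for the PAN-X French checkpoint in the card above; the example sentence is illustrative.

```python
from transformers import pipeline

# Token classification with word-piece aggregation into whole entities.
ner = pipeline(
    "token-classification",
    model="cataluna84/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)

print(ner("Emmanuel Macron a visité Marseille avec des représentants de l'ONU."))
```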
ai4bharat/indic-bert
ai4bharat
2022-08-07T17:32:41Z
859,331
44
transformers
[ "transformers", "pytorch", "albert", "as", "bn", "en", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - as - bn - en - gu - hi - kn - ml - mr - or - pa - ta - te license: mit datasets: - AI4Bharat IndicNLP Corpora --- # IndicBERT IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on our novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. IndicBERT has much fewer parameters than other multilingual models (mBERT, XLM-R etc.) while it also achieves a performance on-par or better than these models. The 12 languages covered by IndicBERT are: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. The code can be found [here](https://github.com/divkakwani/indic-bert). For more information, checkout our [project page](https://indicnlp.ai4bharat.org/) or our [paper](https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf). ## Pretraining Corpus We pre-trained indic-bert on AI4Bharat's monolingual corpus. The corpus has the following distribution of languages: | Language | as | bn | en | gu | hi | kn | | | ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | **No. of Tokens** | 36.9M | 815M | 1.34B | 724M | 1.84B | 712M | | | **Language** | **ml** | **mr** | **or** | **pa** | **ta** | **te** | **all** | | **No. of Tokens** | 767M | 560M | 104M | 814M | 549M | 671M | 8.9B | ## Evaluation Results IndicBERT is evaluated on IndicGLUE and some additional tasks. The results are summarized below. For more details about the tasks, refer our [official repo](https://github.com/divkakwani/indic-bert) #### IndicGLUE Task | mBERT | XLM-R | IndicBERT -----| ----- | ----- | ------ News Article Headline Prediction | 89.58 | 95.52 | **95.87** Wikipedia Section Title Prediction| **73.66** | 66.33 | 73.31 Cloze-style multiple-choice QA | 39.16 | 27.98 | **41.87** Article Genre Classification | 90.63 | 97.03 | **97.34** Named Entity Recognition (F1-score) | **73.24** | 65.93 | 64.47 Cross-Lingual Sentence Retrieval Task | 21.46 | 13.74 | **27.12** Average | 64.62 | 61.09 | **66.66** #### Additional Tasks Task | Task Type | mBERT | XLM-R | IndicBERT -----| ----- | ----- | ------ | ----- BBC News Classification | Genre Classification | 60.55 | **75.52** | 74.60 IIT Product Reviews | Sentiment Analysis | 74.57 | **78.97** | 71.32 IITP Movie Reviews | Sentiment Analaysis | 56.77 | **61.61** | 59.03 Soham News Article | Genre Classification | 80.23 | **87.6** | 78.45 Midas Discourse | Discourse Analysis | 71.20 | **79.94** | 78.44 iNLTK Headlines Classification | Genre Classification | 87.95 | 93.38 | **94.52** ACTSA Sentiment Analysis | Sentiment Analysis | 48.53 | 59.33 | **61.18** Winograd NLI | Natural Language Inference | 56.34 | 55.87 | **56.34** Choice of Plausible Alternative (COPA) | Natural Language Inference | 54.92 | 51.13 | **58.33** Amrita Exact Paraphrase | Paraphrase Detection | **93.81** | 93.02 | 93.75 Amrita Rough Paraphrase | Paraphrase Detection | 83.38 | 82.20 | **84.33** Average | | 69.84 | **74.42** | 73.66 \* Note: all models have been restricted to a max_seq_length of 128. ## Downloads The model can be downloaded [here](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/models/indic-bert-v1.tar.gz). Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from [Huggingface](https://huggingface.co/ai4bharat/indic-bert). 
## Citing If you are using any of the resources, please cite the following article: ``` @inproceedings{kakwani2020indicnlpsuite, title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}}, author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar}, year={2020}, booktitle={Findings of EMNLP}, } ``` We would like to hear from you if: - You are using our resources. Please let us know how you are putting these resources to use. - You have any feedback on these resources. ## License The IndicBERT code (and models) are released under the MIT License. ## Contributors - Divyanshu Kakwani - Anoop Kunchukuttan - Gokul NC - Satish Golla - Avik Bhattacharyya - Mitesh Khapra - Pratyush Kumar This work is the outcome of a volunteer effort as part of [AI4Bharat initiative](https://ai4bharat.org). ## Contact - Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com)) - Mitesh Khapra ([miteshk@cse.iitm.ac.in](mailto:miteshk@cse.iitm.ac.in)) - Pratyush Kumar ([pratyush@cse.iitm.ac.in](mailto:pratyush@cse.iitm.ac.in))
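The IndicBERT card above explains the model and its benchmarks but shows no loading code; a hedged feature-extraction sketch (not the paper's fine-tuning setup), assuming `sentencepiece` is installed for the ALBERT tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

# Load IndicBERT (ALBERT-based multilingual encoder) and embed a Hindi sentence.
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
model = AutoModel.from_pretrained("ai4bharat/indic-bert")

inputs = tokenizer("मैं एक लड़का हूँ", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```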
ultra-coder54732/stance-detection-prop-16
ultra-coder54732
2022-08-07T17:32:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-07T06:06:49Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: stance-detection-prop-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stance-detection-prop-16 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
cataluna84/xlm-roberta-base-finetuned-panx-de-fr
cataluna84
2022-08-07T17:16:11Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-07T16:47:10Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1608 - F1: 0.8593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 | | 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 | | 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
ai4bharat/IndicBART
ai4bharat
2022-08-07T17:12:33Z
1,663
29
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "multilingual", "nlp", "indicnlp", "as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "arxiv:2109.02903", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te tags: - multilingual - nlp - indicnlp --- IndicBART is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use IndicBART model to build natural language generation applications for Indian languages by finetuning the model with supervised training data for tasks like machine translation, summarization, question generation, etc. Some salient features of the IndicBART are: <ul> <li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5. </li> <li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li> <li> Trained on large Indic language corpora (452 million sentences and 9 billion tokens) which also includes Indian English content. </li> <li> All languages, except English, have been represented in Devanagari script to encourage transfer learning among the related languages. </li> </ul> You can read more about IndicBART in this <a href="https://arxiv.org/abs/2109.02903">paper</a>. For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/ # Pre-training corpus We used the <a href="https://indicnlp.ai4bharat.org/corpora/">IndicCorp</a> data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART. # Usage: ``` from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM from transformers import AlbertTokenizer, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True) # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True) model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART") # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART") # Some initial mapping bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>") eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>") pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>") # To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>'] # First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>". inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]]) out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 942, 43, 32720, 8384, 64001]]) # Note that if you use any language other than Hindi or Marathi, you should convert its script to Devanagari using the Indic NLP Library. model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:]) # For loss model_outputs.loss ## This is not label smoothed. # For logits model_outputs.logits # For generation. Pardon the messiness. Note the decoder_start_token_id. 
model.eval() # Set dropouts to zero model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>")) # Decode to get output strings decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # I am a boy # Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library. # What if we mask? inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>")) decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # I am happy inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>")) decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # मैं जानता हूँ inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>")) decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # मला ओळखलं पाहिजे ``` # Notes: 1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible. 2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration 3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class. 4. If you wish to use any language written in a non-Devanagari script (except English), then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script. # Fine-tuning on a downstream task 1. If you wish to fine-tune this model, then you can do so using the <a href="https://github.com/prajdabre/yanmtt">YANMTT</a> toolkit, following the instructions <a href="https://github.com/AI4Bharat/indic-bart ">here</a>. 2. 
(Untested) Alternatively, you may use the official huggingface scripts for <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation">translation</a> and <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization">summarization</a>. # Contributors <ul> <li> Raj Dabre </li> <li> Himani Shrotriya </li> <li> Anoop Kunchukuttan </li> <li> Ratish Puduppully </li> <li> Mitesh M. Khapra </li> <li> Pratyush Kumar </li> </ul> # Paper If you use IndicBART, please cite the following paper: ``` @misc{dabre2021indicbart, title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages}, author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar}, year={2021}, eprint={2109.02903}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # License The model is available under the MIT License.
cataluna84/xlm-roberta-base-finetuned-panx-de
cataluna84
2022-08-07T16:42:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-07T15:11:07Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
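The card above does not include a usage snippet; a minimal sketch of running this checkpoint for German NER with the `transformers` token-classification pipeline is shown below (the example sentence is made up, and the entity labels come from the model's own config):

```python
from transformers import pipeline

# Load the fine-tuned XLM-R checkpoint for German NER (PAN-X.de subset of XTREME).
ner = pipeline(
    "token-classification",
    model="cataluna84/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Hypothetical German sentence used purely for illustration.
for entity in ner("Jeff Dean arbeitet bei Google in Zürich."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```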
Yuri/xlm-roberta-base-finetuned-panx-de-fr
Yuri
2022-08-07T16:26:58Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-07T16:01:49Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1608 - F1: 0.8593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 | | 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 | | 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
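As with the model above, this card lists metrics only; a rough sketch of calling the German/French NER checkpoint directly through `AutoModelForTokenClassification` (rather than a pipeline) could look like the following, with the example sentence invented for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Yuri/xlm-roberta-base-finetuned-panx-de-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Emmanuel Macron a rencontré Angela Merkel à Berlin."  # made-up French example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-word token to the label the model predicts for it.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```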
huggingtweets/apesahoy-jrc1921-spicymoregano
huggingtweets
2022-08-07T16:05:52Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-07T16:02:45Z
--- language: en thumbnail: http://www.huggingtweets.com/apesahoy-jrc1921-spicymoregano/1659888347907/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516733295194775556/DnbEYpX0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1551342266194731009/DRWm6dXG_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & j & morg</div> <div style="text-align: center; font-size: 14px;">@apesahoy-jrc1921-spicymoregano</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Humongous Ape MP & j & morg. | Data | Humongous Ape MP | j | morg | | --- | --- | --- | --- | | Tweets downloaded | 3245 | 3206 | 3222 | | Retweets | 199 | 1204 | 853 | | Short tweets | 612 | 791 | 465 | | Tweets kept | 2434 | 1211 | 1904 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jvgggd2b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-jrc1921-spicymoregano's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jt78qwg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jt78qwg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/apesahoy-jrc1921-spicymoregano') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
paola-md/recipe-distil
paola-md
2022-08-07T14:54:15Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-07T12:35:56Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: recipe-distil results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-distil This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3731 - Rmse: 1.8366 - Mse: 3.3731 - Mae: 1.6145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:| | 3.3242 | 1.0 | 12809 | 3.3718 | 1.8362 | 3.3718 | 1.6145 | | 3.3237 | 2.0 | 25618 | 3.3720 | 1.8363 | 3.3720 | 1.6145 | | 3.3214 | 3.0 | 38427 | 3.3719 | 1.8363 | 3.3719 | 1.6145 | | 3.3207 | 4.0 | 51236 | 3.3759 | 1.8374 | 3.3759 | 1.6145 | | 3.32 | 5.0 | 64045 | 3.3721 | 1.8363 | 3.3721 | 1.6145 | | 3.3199 | 6.0 | 76854 | 3.3730 | 1.8366 | 3.3730 | 1.6145 | | 3.3191 | 7.0 | 89663 | 3.3728 | 1.8365 | 3.3728 | 1.6145 | | 3.3189 | 8.0 | 102472 | 3.3718 | 1.8363 | 3.3718 | 1.6145 | | 3.319 | 9.0 | 115281 | 3.3731 | 1.8366 | 3.3731 | 1.6145 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
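The metrics above (RMSE/MSE/MAE) indicate a regression head rather than a classifier; assuming the checkpoint exposes a single-output sequence-classification head, a minimal scoring sketch might look like this (the recipe text is made up, since the card does not document the training data):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "paola-md/recipe-distil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical recipe text; replace with whatever inputs the model was trained on.
text = "Combine flour, sugar and butter, then bake for 25 minutes at 180C."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)
```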
VietAI/gpt-neo-1.3B-vietnamese-news
VietAI
2022-08-07T14:32:07Z
1,341
28
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "causal-lm", "gpt", "vi", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language:
- vi
tags:
- pytorch
- causal-lm
- gpt
---

# GPT-Neo 1.3B on Vietnamese News

Details will be available soon. For more information, please contact anhduongng.1001@gmail.com (Dương) / imthanhlv@gmail.com (Thành) / nguyenvulebinh@gmail.com (Bình).

### How to use

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news")
model = AutoModelForCausalLM.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news", low_cpu_mem_usage=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

prompt = "Tiềm năng của trí tuệ nhân tạo"  # your input sentence
input_ids = tokenizer(prompt, return_tensors="pt")['input_ids'].to(device)

max_length = 100  # maximum length of the generated sequence; adjust as needed
gen_tokens = model.generate(
    input_ids,
    max_length=max_length,
    do_sample=True,
    temperature=0.9,
    top_k=20,
)

gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
constanter/q-FrozenLake-v1-4x4-noSlippery
constanter
2022-08-07T13:56:36Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-07T13:56:28Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# accompanying course notebook/codebase; they are not part of a published library.
model = load_from_hub(repo_id="constanter/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
mohammedbriman/t5-small-finetuned-tf-xsum
mohammedbriman
2022-08-07T13:52:08Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-16T14:46:07Z
--- tags: - generated_from_keras_callback model-index: - name: t5-small-finetuned-tf-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-tf-xsum This model was trained from scratch on xsum dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3494 - Validation Loss: 2.1933 - Train Rouge1: 32.0241 - Train Rouge2: 10.1025 - Train Rougel: 25.8834 - Train Rougelsum: 25.9662 - Train Gen Len: 18.69 - Epoch: 8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | | :--------: | :-------------: | :----------: | :----------: | :----------: | :-------------: | :-----------: | :---: | | 2.7197 | 2.4028 | 29.6376 | 8.8596 | 22.8598 | 22.8401 | 18.82 | 1 | | 2.5822 | 2.3407 | 30.6849 | 9.3100 | 23.8971 | 23.9096 | 18.745 | 2 | | 2.5174 | 2.2979 | 32.3706 | 11.5463 | 26.4253 | 26.3525 | 18.75 | 3 | | 2.4711 | 2.2703 | 32.2768 | 11.0460 | 26.2472 | 26.1540 | 18.825 | 4 | | 2.4305 | 2.2432 | 29.3935 | 8.3337 | 22.2859 | 22.3557 | 18.65 | 5 | | 2.3994 | 2.2237 | 31.0993 | 8.7932 | 23.6971 | 23.7702 | 18.815 | 6 | | 2.3732 | 2.2071 | 31.4819 | 10.0677 | 25.1846 | 25.2829 | 18.675 | 7 | | 2.3494 | 2.1933 | 32.0241 | 10.1025 | 25.8834 | 25.9662 | 18.69 | 8 | ### Framework versions - Transformers 4.21.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
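Since this is a TensorFlow checkpoint (trained via a Keras callback), a short summarization sketch could use the TF auto classes as below; the article text is invented, and the `"summarize: "` prefix is the usual T5 convention rather than something stated in the card:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "mohammedbriman/t5-small-finetuned-tf-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The local council has approved plans for a new cycle path connecting "
    "the town centre with the railway station, with work due to start next spring."
)  # made-up example article
inputs = tokenizer("summarize: " + article, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```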
mikesun112233/t5-base-finetuned-question-generation-ap
mikesun112233
2022-08-07T12:47:58Z
6
0
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:squad", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-07T11:34:23Z
---
language: en
datasets:
- squad
widget:
- text: "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
---

# T5-base fine-tuned on SQuAD for **Question Generation**

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) for **Question Generation** by just prepending the *answer* to the *context*.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

Dataset ID: ```squad``` from [Huggingface/NLP](https://github.com/huggingface/nlp)

| Dataset | Split | # samples |
| ------- | ----- | --------- |
| squad   | train | 87599     |
| squad   | valid | 10570     |

How to load it from [nlp](https://github.com/huggingface/nlp):

```python
train_dataset = nlp.load_dataset('squad', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad', split=nlp.Split.VALIDATION)
```

Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28), who has also done great research on [**Question Generation**](https://github.com/patil-suraj/question_generation).

## Model in Action 🚀

```python
# Tip: install transformers from source
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap")

def get_question(answer, context, max_length=64):
    input_text = "answer: %s context: %s </s>" % (answer, context)
    features = tokenizer([input_text], return_tensors='pt')

    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'],
                            max_length=max_length)

    return tokenizer.decode(output[0])

context = "Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
answer = "Manuel"

print(get_question(answer, context))
# output: question: Who created the RuPERTa-base?
```

## Citation

If you want to cite this model you can use this:

```bibtex
@misc{mromero2021t5-base-finetuned-question-generation-ap,
  title={T5 (base) fine-tuned on SQUAD for QG via AP},
  author={Romero, Manuel},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap}},
  year={2021}
}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
sunilkumardash9/finetuning-sentiment-model-3000-samples
sunilkumardash9
2022-08-07T12:16:22Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-07T12:04:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8684210526315789 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3213 - Accuracy: 0.8667 - F1: 0.8684 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
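For completeness, a minimal sketch of querying this IMDB sentiment classifier through the `transformers` pipeline follows; the review is made up, and the output labels (e.g. `LABEL_0`/`LABEL_1`) depend on how the checkpoint's config maps ids to names:

```python
from transformers import pipeline

# Binary sentiment classifier fine-tuned from distilbert-base-uncased on IMDB samples.
classifier = pipeline(
    "sentiment-analysis",
    model="sunilkumardash9/finetuning-sentiment-model-3000-samples",
)

print(classifier("The plot was thin, but the performances kept me watching."))
```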
yhavinga/t5-small-24L-ccmatrix-multi
yhavinga
2022-08-07T12:07:16Z
8
2
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "translation", "seq2seq", "nl", "en", "dataset:yhavinga/mc4_nl_cleaned", "dataset:yhavinga/ccmatrix", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-04-27T09:46:23Z
--- language: - nl - en datasets: - yhavinga/mc4_nl_cleaned - yhavinga/ccmatrix tags: - t5 - translation - seq2seq pipeline_tag: translation widget: - text: "It is a painful and tragic spectacle that rises before me: I have drawn back the curtain from the rottenness of man. This word, in my mouth, is at least free from one suspicion: that it involves a moral accusation against humanity." - text: "Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible. His camouflage was perfect, since the waiting room had a disorderly and demoralized air, too. Chairs and ashtrays had been moved away from the walls. The floor was paved with spattered dropcloths." license: apache-2.0 --- # t5-small-24L-ccmatrix-multi A [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) model finetuned for Dutch to English and English to Dutch translation on the CCMatrix dataset. Evaluation metrics of this model are listed in the **Translation models** section below. You can use this model directly with a pipeline for text translation: ```python model_name = "yhavinga/t5-small-24L-ccmatrix-multi" from transformers import AutoTokenizer from transformers import AutoModelForSeq2SeqLM from transformers import pipeline import torch device_num = 0 if torch.cuda.is_available() else -1 device = "cpu" if device_num < 0 else f"cuda:{device_num}" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) params = {"max_length": 128, "num_beams": 4, "early_stopping": True} en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num) print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""", **params)[0]['translation_text']) nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num) print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""", **params)[0]['translation_text']) ``` This **t5 eff** model has **249M** parameters. It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d10h**, with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens). Pre-training evaluation loss and accuracy are **1,18** and **0,74**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. ## Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. 
## Dataset(s) All models listed below are pre-trained on [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4. The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix). ## Dutch T5 Models Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models). `t5-base-dutch` is the only model with an original T5 config. The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function, and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`). The T5-eff models are models that differ in their number of layers. The table will list the several dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient `t5-xl-4L-dutch-english-cased`. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | |:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------| | *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff | | *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 | | *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 | | *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 | | *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 | | *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 | | *num parameters* | 223M | 248M 
| 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M | | *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | | *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | | *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | | *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 | | *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 | | *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 | | *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 | | *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h | | *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | | *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 | | *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 | | *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 | | *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 | ## Evaluation Most models from the list above have been fine-tuned for summarization and translation. The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better) and y-axis the summarization Rouge1 translation score (higher is better). Point size is proportional to the model size. Models with faster inference speed are green, slower inference speed is plotted as bleu. ![Evaluation T5 Dutch English](evaluation_t5_dutch_english.png) Evaluation was run on fine-tuned models trained with the following settings: | | Summarization | Translation | |---------------:|------------------|-------------------| | Dataset | CNN Dailymail NL | CCMatrix en -> nl | | #train samples | 50K | 50K | | Optimizer | Adam | Adam | | learning rate | 0.001 | 0.0005 | | source length | 1024 | 128 | | target length | 142 | 128 | |label smoothing | 0.05 | 0.1 | | #eval samples | 1000 | 1000 | Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation are not saved, since they were trained for comparison of pre-trained models only. The numbers for summarization are the Rouge scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 | | *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 | | *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 | | *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 | | *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 | The models below have been evaluated for English to Dutch translation. Note that the first four models are pre-trained on Dutch only. That they still perform adequate is probably because the translation direction is English to Dutch. The numbers reported are the Bleu scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 | | *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 | | *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 | | *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 | | *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | | *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 | | *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 | ## Translation models The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language directions on the first 25M samples from CCMatrix, giving a total of 50M training samples. Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books. The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score averaged over all three evaluation datasets. The best scores displayed in bold for both translation directions. 
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | |:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------| | *source_lang* | en | nl | en | nl | | *target_lang* | nl | en | nl | en | | *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: | | *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** | | *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 | | *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 | | *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 | | *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 | | *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 | | *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 | | *max_source_length* | 128 | 128 | 128 | 128 | | *max_target_length* | 128 | 128 | 128 | 128 | | *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 | | *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 | | *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 | | *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 | | *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 | | *train_batch_size* | 128 | 128 | 128 | 128 | | *warmup_steps* | 2000 | 2000 | 2000 | 2000 | | *total steps* | 390625 | 390625 | 390625 | 390625 | | *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h | | *num parameters* | 729M | 729M | 250M | 250M | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts of the training. Weights & Biases made it possible to keep track of many training sessions and orchestrate hyper-parameter sweeps with insightful visualizations. The following repositories where helpful in setting up the TPU-VM, and getting an idea what sensible hyper-parameters are for training gpt2 from scratch: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
yhavinga/t5-base-36L-ccmatrix-multi
yhavinga
2022-08-07T12:07:12Z
13
1
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "translation", "seq2seq", "nl", "en", "dataset:yhavinga/mc4_nl_cleaned", "dataset:yhavinga/ccmatrix", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-04-22T07:56:31Z
--- language: - nl - en datasets: - yhavinga/mc4_nl_cleaned - yhavinga/ccmatrix tags: - t5 - translation - seq2seq pipeline_tag: translation widget: - text: "It is a painful and tragic spectacle that rises before me: I have drawn back the curtain from the rottenness of man. This word, in my mouth, is at least free from one suspicion: that it involves a moral accusation against humanity." - text: "Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible. His camouflage was perfect, since the waiting room had a disorderly and demoralized air, too. Chairs and ashtrays had been moved away from the walls. The floor was paved with spattered dropcloths." license: apache-2.0 --- # t5-base-36L-ccmatrix-multi A [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) model finetuned for Dutch to English and English to Dutch translation on the CCMatrix dataset. Evaluation metrics of this model are listed in the **Translation models** section below. You can use this model directly with a pipeline for text translation: ```python model_name = "yhavinga/t5-base-36L-ccmatrix-multi" from transformers import AutoTokenizer from transformers import AutoModelForSeq2SeqLM from transformers import pipeline import torch device_num = 0 if torch.cuda.is_available() else -1 device = "cpu" if device_num < 0 else f"cuda:{device_num}" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) params = {"max_length": 128, "num_beams": 4, "early_stopping": True} en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num) print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""", **params)[0]['translation_text']) nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num) print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""", **params)[0]['translation_text']) ``` This **t5 eff** model has **728M** parameters. It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **17d15h**, with a sequence length of **512**, batch size **512** and **212963** total steps (**56B** tokens). Pre-training evaluation loss and accuracy are **1,05** and **0,76**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. ## Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. 
## Dataset(s) All models listed below are pre-trained on [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4. The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix). ## Dutch T5 Models Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models). `t5-base-dutch` is the only model with an original T5 config. The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function, and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`). The T5-eff models are models that differ in their number of layers. The table will list the several dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient `t5-xl-4L-dutch-english-cased`. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | |:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------| | *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff | | *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 | | *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 | | *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 | | *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 | | *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 | | *num parameters* | 223M | 248M 
| 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M | | *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | | *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | | *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | | *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 | | *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 | | *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 | | *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 | | *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h | | *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | | *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 | | *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 | | *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 | | *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 | ## Evaluation Most models from the list above have been fine-tuned for summarization and translation. The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better) and y-axis the summarization Rouge1 translation score (higher is better). Point size is proportional to the model size. Models with faster inference speed are green, slower inference speed is plotted as bleu. ![Evaluation T5 Dutch English](evaluation_t5_dutch_english.png) Evaluation was run on fine-tuned models trained with the following settings: | | Summarization | Translation | |---------------:|------------------|-------------------| | Dataset | CNN Dailymail NL | CCMatrix en -> nl | | #train samples | 50K | 50K | | Optimizer | Adam | Adam | | learning rate | 0.001 | 0.0005 | | source length | 1024 | 128 | | target length | 142 | 128 | |label smoothing | 0.05 | 0.1 | | #eval samples | 1000 | 1000 | Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation are not saved, since they were trained for comparison of pre-trained models only. The numbers for summarization are the Rouge scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 | | *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 | | *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 | | *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 | | *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 | The models below have been evaluated for English to Dutch translation. Note that the first four models are pre-trained on Dutch only. That they still perform adequate is probably because the translation direction is English to Dutch. The numbers reported are the Bleu scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 | | *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 | | *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 | | *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 | | *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | | *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 | | *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 | ## Translation models The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language directions on the first 25M samples from CCMatrix, giving a total of 50M training samples. Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books. The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score averaged over all three evaluation datasets. The best scores displayed in bold for both translation directions. 
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | |:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------| | *source_lang* | en | nl | en | nl | | *target_lang* | nl | en | nl | en | | *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: | | *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** | | *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 | | *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 | | *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 | | *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 | | *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 | | *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 | | *max_source_length* | 128 | 128 | 128 | 128 | | *max_target_length* | 128 | 128 | 128 | 128 | | *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 | | *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 | | *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 | | *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 | | *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 | | *train_batch_size* | 128 | 128 | 128 | 128 | | *warmup_steps* | 2000 | 2000 | 2000 | 2000 | | *total steps* | 390625 | 390625 | 390625 | 390625 | | *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h | | *num parameters* | 729M | 729M | 250M | 250M | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts of the training. Weights & Biases made it possible to keep track of many training sessions and orchestrate hyper-parameter sweeps with insightful visualizations. The following repositories where helpful in setting up the TPU-VM, and getting an idea what sensible hyper-parameters are for training gpt2 from scratch: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
yhavinga/t5-small-24L-dutch-english
yhavinga
2022-08-07T12:06:40Z
22
1
transformers
[ "transformers", "jax", "t5", "text2text-generation", "seq2seq", "nl", "en", "dataset:yhavinga/mc4_nl_cleaned", "arxiv:1910.10683", "arxiv:2109.10686", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text2text-generation
2022-03-09T20:39:13Z
--- language: - nl - en datasets: - yhavinga/mc4_nl_cleaned tags: - t5 - seq2seq inference: false license: apache-2.0 --- # t5-small-24L-dutch-english A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned). This **t5 eff** model has **249M** parameters. It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d10h**, with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens). Pre-training evaluation loss and accuracy are **1,18** and **0,74**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. * Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off. * For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application! Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture and configs, though it must be noted that this model (t5-small-24L-dutch-english) is unrelated to these projects and not an 'official' checkpoint. * **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. * **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. ## Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. ## Dataset(s) All models listed below are pre-trained on [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4. The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix). 
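Because the inference widget is disabled, a minimal loading sketch is given below. It assumes the checkpoint can be read with the standard 🤗 Transformers seq2seq classes (pass `from_flax=True` if only Flax weights are published); the downstream fine-tuning step itself is left out.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "yhavinga/t5-small-24L-dutch-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)           # cased SentencePiece tokenizer with 32003 tokens
model = T5ForConditionalGeneration.from_pretrained(model_id)  # add from_flax=True if no PyTorch weights exist

# The pre-trained checkpoint is a denoising model only: fine-tune it on a
# downstream task (summarization, translation, ...) before running inference.
print(model.num_parameters())
```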
## Dutch T5 Models Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models). `t5-base-dutch` is the only model with an original T5 config. The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function, and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`). The T5-eff models are models that differ in their number of layers. The table will list the several dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient `t5-xl-4L-dutch-english-cased`. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | |:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------| | *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff | | *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 | | *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 | | *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 | | *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 | | *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 | | *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M | | *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | | *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | | *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | | *tr. 
seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 | | *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 | | *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 | | *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 | | *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h | | *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | | *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 | | *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 | | *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 | | *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 | ## Evaluation Most models from the list above have been fine-tuned for summarization and translation. The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better) and y-axis the summarization Rouge1 translation score (higher is better). Point size is proportional to the model size. Models with faster inference speed are green, slower inference speed is plotted as bleu. ![Evaluation T5 Dutch English](evaluation_t5_dutch_english.png) Evaluation was run on fine-tuned models trained with the following settings: | | Summarization | Translation | |---------------:|------------------|-------------------| | Dataset | CNN Dailymail NL | CCMatrix en -> nl | | #train samples | 50K | 50K | | Optimizer | Adam | Adam | | learning rate | 0.001 | 0.0005 | | source length | 1024 | 128 | | target length | 142 | 128 | |label smoothing | 0.05 | 0.1 | | #eval samples | 1000 | 1000 | Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation are not saved, since they were trained for comparison of pre-trained models only. The numbers for summarization are the Rouge scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 | | *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 | | *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 | | *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 | | *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 | The models below have been evaluated for English to Dutch translation. Note that the first four models are pre-trained on Dutch only. That they still perform adequate is probably because the translation direction is English to Dutch. The numbers reported are the Bleu scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 | | *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 | | *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 | | *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 | | *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | | *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 | | *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 | ## Translation models The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language directions on the first 25M samples from CCMatrix, giving a total of 50M training samples. Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books. The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score averaged over all three evaluation datasets. The best scores displayed in bold for both translation directions. 
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | |:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------| | *source_lang* | en | nl | en | nl | | *target_lang* | nl | en | nl | en | | *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: | | *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** | | *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 | | *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 | | *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 | | *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 | | *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 | | *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 | | *max_source_length* | 128 | 128 | 128 | 128 | | *max_target_length* | 128 | 128 | 128 | 128 | | *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 | | *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 | | *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 | | *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 | | *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 | | *train_batch_size* | 128 | 128 | 128 | 128 | | *warmup_steps* | 2000 | 2000 | 2000 | 2000 | | *total steps* | 390625 | 390625 | 390625 | 390625 | | *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h | | *num parameters* | 729M | 729M | 250M | 250M | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts of the training. Weights & Biases made it possible to keep track of many training sessions and orchestrate hyper-parameter sweeps with insightful visualizations. The following repositories where helpful in setting up the TPU-VM, and getting an idea what sensible hyper-parameters are for training gpt2 from scratch: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
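As a usage illustration for the translation fine-tunes in the table above, here is a minimal sketch. It assumes the `t5-small-24L-ccmatrix-multi` checkpoint loads with the standard seq2seq classes, and it reuses the `source_prefix` and 128-token lengths listed in the table; the input sentence is invented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yhavinga/t5-small-24L-ccmatrix-multi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # add from_flax=True if only Flax weights are available

text = "translate English to Dutch: The weather is nice today."  # source_prefix taken from the table above
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```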
yhavinga/t5-v1_1-base-dutch-english-cased
yhavinga
2022-08-07T12:06:31Z
3
0
transformers
[ "transformers", "jax", "t5", "text2text-generation", "seq2seq", "nl", "en", "dataset:yhavinga/mc4_nl_cleaned", "arxiv:1910.10683", "arxiv:2109.10686", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - nl - en datasets: - yhavinga/mc4_nl_cleaned tags: - t5 - seq2seq inference: false license: apache-2.0 --- # t5-v1_1-base-dutch-english-cased A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned). This **t5-v1.1** model has **247M** parameters. It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `small_en_nl` for **10** epoch(s) and a duration of **11d18h**, with a sequence length of **512**, batch size **128** and **2839630** total steps (**186B** tokens). Pre-training evaluation loss and accuracy are **1,11** and **0,75**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. * Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off. * For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application! Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture and configs, though it must be noted that this model (t5-v1_1-base-dutch-english-cased) is unrelated to these projects and not an 'official' checkpoint. * **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. * **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. ## Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. ## Dataset(s) All models listed below are pre-trained on [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4. The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix). 
## Dutch T5 Models Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models). `t5-base-dutch` is the only model with an original T5 config. The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function, and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`). The T5-eff models are models that differ in their number of layers. The table will list the several dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient `t5-xl-4L-dutch-english-cased`. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | |:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------| | *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff | | *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 | | *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 | | *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 | | *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 | | *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 | | *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M | | *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | | *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | | *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | | *tr. 
seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 | | *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 | | *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 | | *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 | | *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h | | *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | | *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 | | *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 | | *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 | | *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 | ## Evaluation Most models from the list above have been fine-tuned for summarization and translation. The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better) and y-axis the summarization Rouge1 translation score (higher is better). Point size is proportional to the model size. Models with faster inference speed are green, slower inference speed is plotted as bleu. ![Evaluation T5 Dutch English](evaluation_t5_dutch_english.png) Evaluation was run on fine-tuned models trained with the following settings: | | Summarization | Translation | |---------------:|------------------|-------------------| | Dataset | CNN Dailymail NL | CCMatrix en -> nl | | #train samples | 50K | 50K | | Optimizer | Adam | Adam | | learning rate | 0.001 | 0.0005 | | source length | 1024 | 128 | | target length | 142 | 128 | |label smoothing | 0.05 | 0.1 | | #eval samples | 1000 | 1000 | Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation are not saved, since they were trained for comparison of pre-trained models only. The numbers for summarization are the Rouge scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 | | *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 | | *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 | | *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 | | *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 | The models below have been evaluated for English to Dutch translation. Note that the first four models are pre-trained on Dutch only. That they still perform adequate is probably because the translation direction is English to Dutch. The numbers reported are the Bleu scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 | | *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 | | *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 | | *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 | | *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | | *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 | | *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 | ## Translation models The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language directions on the first 25M samples from CCMatrix, giving a total of 50M training samples. Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books. The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score averaged over all three evaluation datasets. The best scores displayed in bold for both translation directions. 
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | |:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------| | *source_lang* | en | nl | en | nl | | *target_lang* | nl | en | nl | en | | *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: | | *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** | | *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 | | *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 | | *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 | | *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 | | *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 | | *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 | | *max_source_length* | 128 | 128 | 128 | 128 | | *max_target_length* | 128 | 128 | 128 | 128 | | *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 | | *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 | | *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 | | *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 | | *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 | | *train_batch_size* | 128 | 128 | 128 | 128 | | *warmup_steps* | 2000 | 2000 | 2000 | 2000 | | *total steps* | 390625 | 390625 | 390625 | 390625 | | *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h | | *num parameters* | 729M | 729M | 250M | 250M | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts of the training. Weights & Biases made it possible to keep track of many training sessions and orchestrate hyper-parameter sweeps with insightful visualizations. The following repositories where helpful in setting up the TPU-VM, and getting an idea what sensible hyper-parameters are for training gpt2 from scratch: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
ilan541/sbert_ssid
ilan541
2022-08-07T09:28:17Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-08-07T09:28:04Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - MS Marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. 
We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply the cross-entropy loss, with the true pairs as targets. #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
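As an illustration of the contrastive fine-tuning objective described above (cosine similarities over all in-batch pairs, cross-entropy against the true pairs), here is a minimal sketch; the scale factor is an assumption, and the authoritative implementation is the `train_script.py` mentioned in the card.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
    # emb_a[i] and emb_b[i] form a true pair; every other row in the batch acts as a negative
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale                              # cosine similarity for every possible pair
    labels = torch.arange(scores.size(0), device=scores.device)  # the true pair sits on the diagonal
    return F.cross_entropy(scores, labels)
```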
osanseviero/is_it_a_llama
osanseviero
2022-08-07T07:02:11Z
0
0
fastai
[ "fastai", "region:us" ]
null
2022-08-07T07:02:02Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
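A minimal usage sketch, assuming the repository holds a fastai `Learner` exported with the Hub integration and that it is an image classifier, as the name `is_it_a_llama` suggests; the image path is a placeholder.

```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("osanseviero/is_it_a_llama")
# "llama.jpg" is a placeholder path; point it at any local image
prediction, _, probabilities = learner.predict("llama.jpg")
print(prediction, probabilities)
```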
btsas/Reinforce-cartpole
btsas
2022-08-07T06:43:47Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-08-07T06:41:39Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole results: - metrics: - type: mean_reward value: 98.60 +/- 33.04 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
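The card does not include code, so here is a small sketch of the Monte-Carlo policy-gradient (REINFORCE) update the agent name refers to; it is illustrative only and not the exact implementation from the course notebook.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # log_probs: list of log pi(a_t | s_t) tensors collected during one episode
    # rewards:   list of rewards received at each step of that episode
    returns, g = [], 0.0
    for r in reversed(rewards):                # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalization as a simple baseline
    return -(torch.stack(log_probs) * returns).sum()               # minimize => gradient ascent on return
```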
btsas/testpyramidsrnd
btsas
2022-08-07T06:22:15Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-08-07T06:22:07Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: btsas/testpyramidsrnd 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
pritoms/opt-350m-opty-350m-lectures
pritoms
2022-08-07T06:08:51Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "opt", "text-generation", "generated_from_trainer", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-07T05:52:41Z
--- license: other tags: - generated_from_trainer model-index: - name: opt-350m-opty-350m-lectures results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m-opty-350m-lectures This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3830 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 2.7828 | | No log | 2.0 | 10 | 2.4889 | | No log | 3.0 | 15 | 2.3830 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
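No usage example is provided in the card, so here is a minimal sketch with the 🤗 `pipeline` API; the prompt is purely illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/opt-350m-opty-350m-lectures")
print(generator("The lecture begins with an overview of", max_new_tokens=50)[0]["generated_text"])
```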
JTH/twitter_classification
JTH
2022-08-07T04:28:59Z
4
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-06T15:46:21Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: twitter_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_classification Label mapping: ``` { 'Business': 2, 'Civil Society': 4, 'Government': 5, 'Individual': 0, 'News & Info': 1, 'Research & Academia': 3 } ``` This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
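A minimal usage sketch that maps the model's outputs back to the label mapping given above; it assumes the checkpoint reports generic `LABEL_<id>` names, and the example tweet is invented.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JTH/twitter_classification")

# Inverse of the label mapping from the card: id -> category name
id2label = {0: "Individual", 1: "News & Info", 2: "Business",
            3: "Research & Academia", 4: "Civil Society", 5: "Government"}

result = classifier("Breaking: city council approves the new transit budget.")[0]
label_id = int(result["label"].split("_")[-1])   # works for generic LABEL_<id> output names
print(id2label[label_id], round(result["score"], 3))
```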
jaybeeja/q-Taxi-v3
jaybeeja
2022-08-07T04:28:49Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-07T04:27:05Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="jaybeeja/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
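`load_from_hub` and `evaluate_agent` above are helper functions from the course notebook rather than a published library; a minimal stand-in for the evaluation loop, assuming a gym 0.25-style API (`reset(seed=...)` returns the state and `step()` returns a 4-tuple), could look like this:

```python
import numpy as np

def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed=None):
    # Greedy evaluation of a tabular Q-learning agent; illustrative stand-in only.
    episode_rewards = []
    for episode in range(n_eval_episodes):
        state = env.reset(seed=eval_seed[episode]) if eval_seed else env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-table
            state, reward, done, _ = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```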
sherwinseah/Fine-tuned-T5-for-MCQGenerator
sherwinseah
2022-08-07T04:18:04Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad_modified_for_t5_qg", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-07T03:07:33Z
--- tags: - generated_from_trainer datasets: - squad_modified_for_t5_qg model-index: - name: Fine-tuned-T5-for-MCQGenerator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fine-tuned-T5-for-MCQGenerator This model is a fine-tuned version of [ramsrigouthamg/t5_squad_v1](https://huggingface.co/ramsrigouthamg/t5_squad_v1) on the squad_modified_for_t5_qg dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0+cpu - Datasets 2.4.0 - Tokenizers 0.12.1
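No usage example is given; the sketch below assumes the input convention of the base SQuAD question-generation model (`context: ... answer: ...`), which is an assumption since the card does not document the prompt format.

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="sherwinseah/Fine-tuned-T5-for-MCQGenerator")

context = "The Amazon rainforest produces a large share of the world's oxygen."
answer = "the Amazon rainforest"
prompt = f"context: {context} answer: {answer}"   # assumed format, not documented in the card
print(qg(prompt, max_length=64)[0]["generated_text"])
```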
jaybeeja/q-FrozenLake-v1-4x4-noSlippery
jaybeeja
2022-08-07T04:12:53Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-06T22:45:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="jaybeeja/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
yewwdunsay/t5-end2end-questions-generation
yewwdunsay
2022-08-07T02:47:20Z
8
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad_modified_for_t5_qg", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-06T09:54:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_modified_for_t5_qg model-index: - name: t5-end2end-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end2end-questions-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
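A hedged usage sketch: end-to-end question-generation fine-tunes of this kind are usually fed the passage with a `generate questions:` prefix and emit several questions separated by a `<sep>` token; both conventions are assumptions here, since the card does not document them.

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="yewwdunsay/t5-end2end-questions-generation")

passage = ("generate questions: Python is a programming language created by Guido van Rossum "
           "and first released in 1991.")
output = qg(passage, max_length=128)[0]["generated_text"]
questions = [q.strip() for q in output.split("<sep>") if q.strip()]  # assumed separator token
print(questions)
```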
arinze/t5_finetuned_xsum_hr
arinze
2022-08-07T02:11:35Z
17
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-05T14:50:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5_finetuned_xsum_hr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: train args: default metrics: - name: Rouge1 type: rouge value: 29.1239 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_finetuned_xsum_hr This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4224 - Rouge1: 29.1239 - Rouge2: 8.3165 - Rougel: 22.9946 - Rougelsum: 22.9996 - Gen Len: 18.819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6839 | 1.0 | 12753 | 2.4474 | 28.721 | 8.0397 | 22.6568 | 22.6527 | 18.8324 | | 2.6425 | 2.0 | 25506 | 2.4224 | 29.1239 | 8.3165 | 22.9946 | 22.9996 | 18.819 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
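A minimal usage sketch; whether this fine-tune expects the conventional T5 `summarize:` source prefix is an assumption, and the article text is invented.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="arinze/t5_finetuned_xsum_hr")

article = ("The council confirmed on Monday that the new river bridge will open to traffic next month, "
           "two years after construction began and several weeks ahead of schedule.")
print(summarizer("summarize: " + article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```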
Einmalumdiewelt/MT5_small_sum-de_GNAD
Einmalumdiewelt
2022-08-07T01:51:24Z
15
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "de", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-06T21:56:49Z
--- language: - de tags: - generated_from_trainer metrics: - rouge model-index: - name: MT5_small_sum-de_GNAD results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MT5_small_sum-de_GNAD This model is a fine-tuned version of [Einmalumdiewelt/MT5_small_sum-de_GNAD](https://huggingface.co/Einmalumdiewelt/MT5_small_sum-de_GNAD) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4236 - Rouge1: 26.0749 - Rouge2: 8.4015 - Rougel: 18.7914 - Rougelsum: 22.9903 - Gen Len: 48.9027 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
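A minimal usage sketch (not part of the original card; the German input text and generation settings are illustrative):
```python
from transformers import pipeline

# German abstractive summarization sketch with the fine-tuned mT5 checkpoint.
summarizer = pipeline("summarization", model="Einmalumdiewelt/MT5_small_sum-de_GNAD")
text = "Die Stadtverwaltung kündigte am Montag an, dass die Bauarbeiten an der neuen Radbrücke im Herbst beginnen und rund zwei Jahre dauern sollen."
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```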
huggingartists/gspd
huggingartists
2022-08-06T23:30:23Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/gspd", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/gspd tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/9409ae2b38424a74b42cb1e4bb66b83a.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">GSPD</div> <a href="https://genius.com/artists/gspd"> <div style="text-align: center; font-size: 14px;">@gspd</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from GSPD. Dataset is available [here](https://huggingface.co/datasets/huggingartists/gspd). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/gspd") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3jof0sex/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on GSPD's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2nxhrny4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2nxhrny4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/gspd') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/gspd") model = AutoModelWithLMHead.from_pretrained("huggingartists/gspd") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/morgenshtern
huggingartists
2022-08-06T23:27:49Z
11
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/morgenshtern", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/morgenshtern tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/cdfb190640789439daae426c799e5e32.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MORGENSHTERN</div> <a href="https://genius.com/artists/morgenshtern"> <div style="text-align: center; font-size: 14px;">@morgenshtern</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from MORGENSHTERN. Dataset is available [here](https://huggingface.co/datasets/huggingartists/morgenshtern). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/morgenshtern") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/lmrnk6sz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on MORGENSHTERN's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1m2jynlh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1m2jynlh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/morgenshtern') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/morgenshtern") model = AutoModelWithLMHead.from_pretrained("huggingartists/morgenshtern") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
theojolliffe/bart-paraphrase-v0.75-e1
theojolliffe
2022-08-06T23:10:51Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-06T22:14:40Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-v0.75-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-v0.75-e1 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1865 - Rouge1: 71.3427 - Rouge2: 66.0011 - Rougel: 69.8855 - Rougelsum: 69.9796 - Gen Len: 19.6036 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.1373 | 1.0 | 2660 | 0.1865 | 71.3427 | 66.0011 | 69.8855 | 69.9796 | 19.6036 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
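A minimal usage sketch (not part of the original card; it mirrors how the eugenesiow/bart-paraphrase base model is typically used, and the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "theojolliffe/bart-paraphrase-v0.75-e1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "The meeting was postponed because several attendees were unavailable."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # paraphrased sentence
```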
huggingtweets/jimmie_graham-twittels
huggingtweets
2022-08-06T22:50:47Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-06T22:50:02Z
--- language: en thumbnail: http://www.huggingtweets.com/jimmie_graham-twittels/1659826242804/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1156915847/2591287537_ba409f68d3_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/540350928487202817/HM-MTuCU_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Harris Wittels & Jimmie Graham</div> <div style="text-align: center; font-size: 14px;">@jimmie_graham-twittels</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Harris Wittels & Jimmie Graham. | Data | Harris Wittels | Jimmie Graham | | --- | --- | --- | | Tweets downloaded | 2936 | 1576 | | Retweets | 83 | 252 | | Short tweets | 155 | 163 | | Tweets kept | 2698 | 1161 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2v94h9dg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jimmie_graham-twittels's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3w440hrh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3w440hrh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jimmie_graham-twittels') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/xnicoleanistonx
huggingtweets
2022-08-06T21:56:20Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-06T21:52:41Z
--- language: en thumbnail: http://www.huggingtweets.com/xnicoleanistonx/1659822957190/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1537858520871149569/meL8yHqZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">XNicoleAnistonX.eth</div> <div style="text-align: center; font-size: 14px;">@xnicoleanistonx</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from XNicoleAnistonX.eth. | Data | XNicoleAnistonX.eth | | --- | --- | | Tweets downloaded | 3223 | | Retweets | 250 | | Short tweets | 375 | | Tweets kept | 2598 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/374xiz3q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xnicoleanistonx's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22ey42je) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22ey42je/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/xnicoleanistonx') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
theojolliffe/bart-paraphrase-v0.5-e1
theojolliffe
2022-08-06T21:18:32Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-06T20:20:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-v0.5-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-v0.5-e1 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1769 - Rouge1: 71.0569 - Rouge2: 64.3725 - Rougel: 68.4976 - Rougelsum: 68.8318 - Gen Len: 19.3131 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.2141 | 1.0 | 1774 | 0.1769 | 71.0569 | 64.3725 | 68.4976 | 68.8318 | 19.3131 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
bassemessam/wav2vec2-large-xls-r-300m-arabic-saudi-colab
bassemessam
2022-08-06T20:37:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-06T19:09:00Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-arabic-saudi-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-arabic-saudi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.10.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
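A minimal usage sketch (not part of the original card; the audio path is a placeholder and the checkpoint is assumed to ship its processor alongside the model weights):
```python
from transformers import pipeline

# CTC speech recognition sketch; expects a 16 kHz mono recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="bassemessam/wav2vec2-large-xls-r-300m-arabic-saudi-colab",
)
print(asr("speech_sample.wav"))  # placeholder file path
```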
inarikami/monogptari-6.7b
inarikami
2022-08-06T20:16:05Z
19
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-06T02:05:32Z
--- license: other tags: - generated_from_trainer metrics: - accuracy model-index: - name: output_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # monogptari-6.7b This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on an English monogatari (物語) dataset. It achieves the following results on the evaluation set: - Loss: 0.7030 - Accuracy: 0.8436 ## Quick start ```python from transformers import pipeline generator = pipeline('text-generation', model="inarikami/monogptari-6.7b", device=0, use_fast=False) generator("I think its about time I talked about Kiss-Shot", min_length=100, max_length=800, do_sample=True, early_stopping=True, temperature=0.98, top_k=50, top_p=1.0) ``` ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Yousef7/U
Yousef7
2022-08-06T19:50:32Z
0
0
null
[ "region:us" ]
null
2022-08-06T19:43:17Z
```
pip install rubika
pip install colorama
pip install rubpy
```
jackoyoungblood/q-FrozenLake-v1-4x4-noSlippery
jackoyoungblood
2022-08-06T19:13:27Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-06T19:13:22Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="jackoyoungblood/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
pcuenq/ddpm-ema-pets-64
pcuenq
2022-08-06T18:25:43Z
11
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:pcuenq/oxford-pets", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-08-06T15:41:29Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: pcuenq/oxford-pets metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-ema-pets-64 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `pcuenq/oxford-pets` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08 - lr_scheduler: cosine - lr_warmup_steps: 500 - ema_inv_gamma: 1.0 - ema_power: 0.75 - ema_max_decay: 0.9999 - mixed_precision: no ### Training results 📈 [TensorBoard logs](https://huggingface.co/pcuenq/ddpm-ema-pets-64/tensorboard?#scalars)
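A hedged sketch for the empty "How to use" section above (it assumes the repo stores a standard DDPMPipeline, i.e. a UNet plus noise scheduler, as saved by the diffusers training script):
```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("pcuenq/ddpm-ema-pets-64")
result = pipe()  # unconditional sampling at the training resolution (64x64)
# Recent diffusers versions expose generated PIL images via `.images`;
# very old versions returned a dict under the "sample" key instead.
image = result.images[0]
image.save("pet_sample.png")
```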
sutd-ai/albert-base-v2-finetuned-squad
sutd-ai
2022-08-06T16:21:24Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-08-06T08:12:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: albert-base-v2-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.9650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0595 | 1.0 | 8248 | 1.4663 | | 0.6228 | 2.0 | 16496 | 0.8433 | | 0.4347 | 3.0 | 24744 | 0.9650 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
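A minimal usage sketch (not part of the original card; the question/context pair is illustrative):
```python
from transformers import pipeline

# Extractive QA sketch; squad_v2 also contains unanswerable questions, so consider
# passing handle_impossible_answer=True when that matters.
qa = pipeline("question-answering", model="sutd-ai/albert-base-v2-finetuned-squad")
result = qa(
    question="How many epochs was the model trained for?",
    context="The model was fine-tuned on the SQuAD v2 dataset for three epochs.",
)
print(result["answer"], result["score"])
```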
JTH/results
JTH
2022-08-06T15:36:15Z
10
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-29T16:43:15Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
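A minimal usage sketch (not part of the original card; because the training data is unknown, the predicted labels are simply whatever ids the fine-tuned classification head was trained with):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JTH/results")
print(classifier("This is a sample sentence to classify."))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```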
L-macc/autotrain-Biomedical_sc_summ-1217846142
L-macc
2022-08-06T14:50:47Z
15
0
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "autotrain", "summarization", "unk", "dataset:L-macc/autotrain-data-Biomedical_sc_summ", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-08-04T07:45:06Z
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain \U0001F917" datasets: - L-macc/autotrain-data-Biomedical_sc_summ co2_eq_emissions: emissions: 16.211223325053414 model-index: - name: L-macc/autotrain-Biomedical_sc_summ-1217846142 results: - task: type: summarization name: Summarization dataset: name: Blaise-g/SumPubmed type: Blaise-g/SumPubmed config: Blaise-g--SumPubmed split: test metrics: - name: ROUGE-1 type: rouge value: 38.2324 verified: true - name: ROUGE-2 type: rouge value: 12.2914 verified: true - name: ROUGE-L type: rouge value: 21.4512 verified: true - name: ROUGE-LSUM type: rouge value: 34.3031 verified: true - name: loss type: loss value: 2.3273825645446777 verified: true - name: gen_len type: gen_len value: 130.3934 verified: true --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1217846142 - CO2 Emissions (in grams): 16.2112 ## Validation Metrics - Loss: 2.159 - Rouge1: 40.236 - Rouge2: 12.161 - RougeL: 23.255 - RougeLsum: 35.138 - Gen Len: 121.504 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/L-macc/autotrain-Biomedical_sc_summ-1217846142 ```
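A local-usage sketch complementing the cURL example above (not part of the original card; the placeholder input text and generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "L-macc/autotrain-Biomedical_sc_summ-1217846142"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder biomedical abstract; LongT5 accepts long inputs, truncated here at 4096 tokens.
abstract = "Background: ... Methods: ... Results: ... Conclusions: ..."
inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```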
theojolliffe/bart-paraphrase-v1-e1
theojolliffe
2022-08-06T14:27:35Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-06T13:13:46Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-v1-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-v1-e1 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1525 - Rouge1: 71.1522 - Rouge2: 65.1426 - Rougel: 68.9323 - Rougelsum: 69.2231 - Gen Len: 19.4302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.1102 | 1.0 | 3547 | 0.1525 | 71.1522 | 65.1426 | 68.9323 | 69.2231 | 19.4302 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
L-macc/autotrain-Biomedical_sc_summ-1217846148
L-macc
2022-08-06T12:54:46Z
11
0
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "autotrain", "summarization", "unk", "dataset:L-macc/autotrain-data-Biomedical_sc_summ", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-08-04T07:45:21Z
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain \U0001F917" datasets: - L-macc/autotrain-data-Biomedical_sc_summ co2_eq_emissions: emissions: 13.651986586580765 model-index: - name: L-macc/autotrain-Biomedical_sc_summ-1217846148 results: - task: type: summarization name: Summarization dataset: name: Blaise-g/SumPubmed type: Blaise-g/SumPubmed config: Blaise-g--SumPubmed split: test metrics: - name: ROUGE-1 type: rouge value: 40.1043 verified: true - name: ROUGE-2 type: rouge value: 13.0457 verified: true - name: ROUGE-L type: rouge value: 22.2288 verified: true - name: ROUGE-LSUM type: rouge value: 35.9295 verified: true - name: loss type: loss value: 2.090137004852295 verified: true - name: gen_len type: gen_len value: 139.7369 verified: true --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1217846148 - CO2 Emissions (in grams): 13.6520 ## Validation Metrics - Loss: 2.503 - Rouge1: 38.768 - Rouge2: 10.791 - RougeL: 21.946 - RougeLsum: 33.780 - Gen Len: 123.331 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/L-macc/autotrain-Biomedical_sc_summ-1217846148 ```
paola-md/recipe-reg
paola-md
2022-08-06T11:51:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-05T12:31:23Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: recipe-reg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-reg This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3717 - Rmse: 1.8362 - Mse: 3.3717 - Mae: 1.6145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:| | 3.3247 | 1.0 | 12809 | 3.3717 | 1.8362 | 3.3717 | 1.6145 | | 3.3238 | 2.0 | 25618 | 3.3722 | 1.8363 | 3.3722 | 1.6145 | | 3.3217 | 3.0 | 38427 | 3.3718 | 1.8362 | 3.3718 | 1.6145 | | 3.3215 | 4.0 | 51236 | 3.3754 | 1.8372 | 3.3754 | 1.6145 | | 3.3203 | 5.0 | 64045 | 3.3721 | 1.8363 | 3.3721 | 1.6145 | | 3.3199 | 6.0 | 76854 | 3.3731 | 1.8366 | 3.3731 | 1.6145 | | 3.319 | 7.0 | 89663 | 3.3731 | 1.8366 | 3.3731 | 1.6145 | | 3.3188 | 8.0 | 102472 | 3.3717 | 1.8362 | 3.3717 | 1.6145 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
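A hedged usage sketch (not part of the original card): the reported RMSE/MAE suggest a single-output regression head, but that is an inference from the metrics, not something the card states.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "paola-md/recipe-reg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Creamy garlic pasta with roasted cherry tomatoes", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape (1, 1) for a regression head
print(logits.squeeze().item())
```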
nvidia/mit-b4
nvidia
2022-08-06T10:28:21Z
8,613
1
transformers
[ "transformers", "pytorch", "tf", "segformer", "image-classification", "vision", "dataset:imagenet_1k", "arxiv:2105.15203", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: other tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b4-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b4") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b4") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nvidia/segformer-b4-finetuned-ade-512-512
nvidia
2022-08-06T10:25:42Z
9,842
1
transformers
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-03-02T23:29:05Z
--- license: other tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b4-sized) model fine-tuned on ADE20k SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation from PIL import Image import requests feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512") model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nvidia/mit-b5
nvidia
2022-08-06T10:25:24Z
23,752
8
transformers
[ "transformers", "pytorch", "tf", "segformer", "image-classification", "vision", "dataset:imagenet_1k", "arxiv:2105.15203", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: other tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b5-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b5") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b5") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nvidia/mit-b1
nvidia
2022-08-06T10:25:12Z
38,555
1
transformers
[ "transformers", "pytorch", "tf", "segformer", "image-classification", "vision", "dataset:imagenet_1k", "arxiv:2105.15203", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: other tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b1-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b1") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b1") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nvidia/mit-b3
nvidia
2022-08-06T10:24:57Z
2,496
2
transformers
[ "transformers", "pytorch", "tf", "segformer", "image-classification", "vision", "dataset:imagenet_1k", "arxiv:2105.15203", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: other tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b3-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b3") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b3") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nvidia/segformer-b1-finetuned-ade-512-512
nvidia
2022-08-06T10:08:05Z
1,308,711
4
transformers
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-03-02T23:29:05Z
--- license: other tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b1-sized) model fine-tuned on ADE20k SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation from PIL import Image import requests feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512") model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nvidia/segformer-b2-finetuned-ade-512-512
nvidia
2022-08-06T10:07:54Z
31,971
3
transformers
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-03-02T23:29:05Z
---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
  example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
  example_title: Castle
---

# SegFormer (b2-sized) model fine-tuned on ADE20k

SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Akbar-Ali/autotrain-News_Summariser_Eng-1224546522
Akbar-Ali
2022-08-06T09:59:35Z
13
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain", "summarization", "en", "dataset:Akbar-Ali/autotrain-data-News_Summariser_Eng", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-08-06T09:16:08Z
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Akbar-Ali/autotrain-data-News_Summariser_Eng
co2_eq_emissions:
  emissions: 35.7814981860994
---

# Model Trained Using AutoTrain

- Problem type: Summarization
- Model ID: 1224546522
- CO2 Emissions (in grams): 35.7815

## Validation Metrics

- Loss: 0.638
- Rouge1: 44.532
- Rouge2: 33.731
- RougeL: 40.372
- RougeLsum: 40.653
- Gen Len: 57.730

## Usage

You can use cURL to access this model via the Inference API:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Akbar-Ali/autotrain-News_Summariser_Eng-1224546522
```
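Alternatively, the checkpoint can be run locally with the `transformers` library. The snippet below is a minimal sketch rather than an official example: it assumes the checkpoint is public (or that you are authenticated), and the article string is a placeholder.

```python
from transformers import pipeline

# Summarization pipeline built on the AutoTrain checkpoint
summarizer = pipeline("summarization", model="Akbar-Ali/autotrain-News_Summariser_Eng-1224546522")

article = "Replace this placeholder with the news article you want to summarise."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```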
trtd56/ppo-CartPole
trtd56
2022-08-05T23:42:39Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-08-05T23:42:30Z
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 166.60 +/- 82.10
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# PPO Agent Playing CartPole-v1

This is a trained model of a PPO agent playing CartPole-v1.

To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8

# Hyperparameters

```python
{'exp_name': 'ppo'
 'seed': 1
 'torch_deterministic': True
 'cuda': True
 'track': False
 'wandb_project_name': 'cleanRL'
 'wandb_entity': None
 'capture_video': False
 'env_id': 'CartPole-v1'
 'total_timesteps': 50000
 'learning_rate': 0.00025
 'num_envs': 4
 'num_steps': 128
 'anneal_lr': True
 'gae': True
 'gamma': 0.99
 'gae_lambda': 0.95
 'num_minibatches': 4
 'update_epochs': 4
 'norm_adv': True
 'clip_coef': 0.2
 'clip_vloss': True
 'ent_coef': 0.01
 'vf_coef': 0.5
 'max_grad_norm': 0.5
 'target_kl': None
 'repo_id': 'trtd56/ppo-CartPole'
 'batch_size': 512
 'minibatch_size': 128}
```
nikhilnayak/lyrics.ai
nikhilnayak
2022-08-05T23:02:19Z
2
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-08-05T18:55:44Z
---
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
pipeline_tag: text-generation
---
skr1125/xlm-roberta-base-finetuned-panx-it
skr1125
2022-08-05T21:02:57Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-05T20:47:33Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8245828245828245 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2401 - F1: 0.8246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8187 | 1.0 | 70 | 0.3325 | 0.7337 | | 0.2829 | 2.0 | 140 | 0.2554 | 0.8003 | | 0.1894 | 3.0 | 210 | 0.2401 | 0.8246 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
skr1125/xlm-roberta-base-finetuned-panx-fr
skr1125
2022-08-05T20:47:17Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-05T20:30:21Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8330673475901692 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2851 - F1: 0.8331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5989 | 1.0 | 191 | 0.3200 | 0.7991 | | 0.2617 | 2.0 | 382 | 0.2897 | 0.8236 | | 0.1672 | 3.0 | 573 | 0.2851 | 0.8331 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
skr1125/xlm-roberta-base-finetuned-panx-de-fr
skr1125
2022-08-05T20:28:38Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-05T20:06:07Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1654 - F1: 0.8590 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 | | 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 | | 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
jackoyoungblood/ppo-LunarLander-v2d
jackoyoungblood
2022-08-05T19:52:17Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-05T19:51:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 269.16 +/- 19.09 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
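Until the TODO above is filled in, here is a minimal, hedged sketch of how a Stable-Baselines3 checkpoint is typically loaded from the Hub. The zip filename below is an assumption, so check the repository's file list for the actual name.

```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename -- verify it against the files stored in this repo
checkpoint = load_from_hub(repo_id="jackoyoungblood/ppo-LunarLander-v2d", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```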
jackoyoungblood/ppo-LunarLander-v2c
jackoyoungblood
2022-08-05T19:46:18Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T23:03:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 267.50 +/- 18.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
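As a stop-gap for the TODO above, the sketch below shows the usual way such a checkpoint is loaded with `huggingface_sb3`; the zip filename is an assumption and may differ in this repository.

```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename -- check the repo's file list for the actual .zip name
checkpoint = load_from_hub(repo_id="jackoyoungblood/ppo-LunarLander-v2c", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```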
skr1125/xlm-roberta-base-finetuned-panx-de
skr1125
2022-08-05T17:50:14Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-02T01:50:37Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.863677639046538 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1343 - F1: 0.8637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 | | 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 | | 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
apjanco/candy-first
apjanco
2022-08-05T13:03:29Z
56
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-08-05T13:03:25Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: candy-first results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7436399459838867 --- # candy-first An initial attempt to identify candy in images. ## Example Images #### airheads ![airheads](images/airheads.jpg) #### candy corn ![candy corn](images/candy_corn.jpg) #### caramel ![caramel](images/caramel.jpg) #### chips ![chips](images/chips.jpg) #### chocolate ![chocolate](images/chocolate.jpg) #### fruit ![fruit](images/fruit.jpg) #### gum ![gum](images/gum.jpg) #### haribo ![haribo](images/haribo.jpg) #### jelly beans ![jelly beans](images/jelly_beans.jpg) #### lollipop ![lollipop](images/lollipop.jpg) #### m&ms ![m&ms](images/m&ms.jpg) #### marshmallow ![marshmallow](images/marshmallow.jpg) #### mentos ![mentos](images/mentos.jpg) #### mint ![mint](images/mint.jpg) #### nerds ![nerds](images/nerds.jpg) #### peeps ![peeps](images/peeps.jpg) #### pez ![pez](images/pez.jpg) #### popcorn ![popcorn](images/popcorn.jpg) #### pretzel ![pretzel](images/pretzel.jpg) #### reeses ![reeses](images/reeses.jpg) #### seeds ![seeds](images/seeds.jpg) #### skittles ![skittles](images/skittles.jpg) #### snickers ![snickers](images/snickers.jpg) #### soda ![soda](images/soda.jpg) #### sour ![sour](images/sour.jpg) #### swedish fish ![swedish fish](images/swedish_fish.jpg) #### taffy ![taffy](images/taffy.jpg) #### tootsie ![tootsie](images/tootsie.jpg) #### twix ![twix](images/twix.jpg) #### twizzlers ![twizzlers](images/twizzlers.jpg) #### warheads ![warheads](images/warheads.jpg) #### whoppers ![whoppers](images/whoppers.jpg)
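## Usage (sketch)

This section is not part of the original card; it is a minimal, assumed example of querying the classifier with the `transformers` image-classification pipeline, using a placeholder image path.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="apjanco/candy-first")

# "my_candy_photo.jpg" is a placeholder path to a local image
predictions = classifier("my_candy_photo.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```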
AlexN/xls-r-300m-pt
AlexN
2022-08-05T11:24:32Z
36
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard", "pt", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - robust-speech-event - mozilla-foundation/common_voice_8_0 - generated_from_trainer - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: xls-r-300m-pt results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 pt type: mozilla-foundation/common_voice_8_0 args: pt metrics: - name: Test WER type: wer value: 19.361 - name: Test CER type: cer value: 5.533 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: fr metrics: - name: Validation WER type: wer value: 47.812 - name: Validation CER type: cer value: 18.805 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 args: pt metrics: - name: Test WER type: wer value: 19.36 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: pt metrics: - name: Test WER type: wer value: 48.01 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: pt metrics: - name: Test WER type: wer value: 49.21 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2290 - Wer: 0.2382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0952 | 0.64 | 500 | 3.0982 | 1.0 | | 1.7975 | 1.29 | 1000 | 0.7887 | 0.5651 | | 1.4138 | 1.93 | 1500 | 0.5238 | 0.4389 | | 1.344 | 2.57 | 2000 | 0.4775 | 0.4318 | | 1.2737 | 3.21 | 2500 | 0.4648 | 0.4075 | | 1.2554 | 3.86 | 3000 | 0.4069 | 0.3678 | | 1.1996 | 4.5 | 3500 | 0.3914 | 0.3668 | | 1.1427 | 5.14 | 4000 | 0.3694 | 0.3572 | | 1.1372 | 5.78 | 4500 | 0.3568 | 0.3501 | | 1.0831 | 6.43 | 5000 | 0.3331 | 0.3253 | | 1.1074 | 7.07 | 5500 | 0.3332 | 0.3352 | | 1.0536 | 7.71 | 6000 | 0.3131 | 0.3152 | | 1.0248 | 8.35 | 6500 | 0.3024 | 0.3023 | | 1.0075 | 9.0 | 7000 | 0.2948 | 0.3028 | | 0.979 | 9.64 | 7500 | 0.2796 | 0.2853 | | 0.9594 | 10.28 | 8000 | 0.2719 | 0.2789 | | 0.9172 | 10.93 | 8500 | 0.2620 | 0.2695 | | 0.9047 | 11.57 | 9000 | 0.2537 | 0.2596 | | 0.8777 | 12.21 | 9500 | 0.2438 | 0.2525 | | 0.8629 | 12.85 | 10000 | 0.2409 | 0.2493 | | 0.8575 | 13.5 | 10500 | 0.2366 | 0.2440 | | 0.8361 | 14.14 | 11000 | 0.2317 | 0.2385 | | 0.8126 | 14.78 | 11500 | 0.2290 | 0.2382 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
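## Usage (sketch)

The card does not include an inference example, so here is a minimal, assumed one using the `transformers` ASR pipeline; the audio path is a placeholder and the recording is expected to be 16 kHz mono Portuguese speech.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="AlexN/xls-r-300m-pt")

# "sample_pt.wav" is a placeholder path to a 16 kHz mono recording
print(asr("sample_pt.wav")["text"])
```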
LiptaphX/wav2vec2-large-xls-r-300m-turkish-colab
LiptaphX
2022-08-05T11:15:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-05T10:46:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
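## Usage (sketch)

A minimal, assumed inference example (not from the original author); the audio path is a placeholder and the input should be a 16 kHz mono Turkish recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="LiptaphX/wav2vec2-large-xls-r-300m-turkish-colab")

# "sample_tr.wav" is a placeholder path to a 16 kHz mono recording
print(asr("sample_tr.wav")["text"])
```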
abdulmatinomotoso/multi_news_article_title_1200
abdulmatinomotoso
2022-08-05T07:14:16Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-05T06:42:59Z
--- tags: - generated_from_trainer model-index: - name: multi_news_article_title_1200 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multi_news_article_title_1200 This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
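## Usage (sketch)

The intended input/output format is not documented; assuming the model maps a news article to a title-style summary (as the name suggests), a minimal `transformers` sketch looks like this:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="abdulmatinomotoso/multi_news_article_title_1200")

article = "Replace this placeholder with the full text of a news article."
print(generator(article, max_length=32)[0]["generated_text"])
```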
sagar122/xperimentalilst_hackathon_2022
sagar122
2022-08-05T06:19:19Z
0
1
null
[ "arxiv:2205.02455", "license:cc-by-nc-4.0", "region:us" ]
null
2022-08-04T08:49:20Z
---
license: cc-by-nc-4.0
---

## COGMEN; Official Pytorch Implementation

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/cogmen-contextualized-gnn-based-multimodal/multimodal-emotion-recognition-on-iemocap)](https://paperswithcode.com/sota/multimodal-emotion-recognition-on-iemocap?p=cogmen-contextualized-gnn-based-multimodal)

**CO**ntextualized **G**NN based **M**ultimodal **E**motion recognitio**N**

![Teaser image](logo.png)

**Picture:** *My sample picture for logo*

This repository contains the official Pytorch implementation of the following paper:

> **COGMEN: COntextualized GNN based Multimodal Emotion recognitioN**<br>
> **Paper:** https://arxiv.org/abs/2205.02455
> **Authors:** Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Vikram Singh, Ashutosh Modi<br>
>
> **Abstract:** *Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person’s emotions are influenced by the other speaker’s utterances and their own emotional state over the utterances. In this paper, we propose COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system that leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels*

## Requirements

- We use PyG (PyTorch Geometric) for the GNN component in our architecture. [RGCNConv](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.RGCNConv) and [TransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.TransformerConv)
- We use [comet](https://comet.ml) for logging all our experiments and its Bayesian optimizer for hyperparameter tuning.
- For textual features we use [SBERT](https://www.sbert.net/).
### Installations

- [Install PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html)
- [Install Comet.ml](https://www.comet.ml/docs/python-sdk/advanced/)
- [Install SBERT](https://www.sbert.net/)

## Preparing datasets for training

    python preprocess.py --dataset="iemocap_4"

## Training networks

    python train.py --dataset="iemocap_4" --modalities="atv" --from_begin --epochs=55

## Run Evaluation

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1biIvonBdJWo2TiYyTiQkxZ_V88JEXa_d?usp=sharing)

    python eval.py --dataset="iemocap_4" --modalities="atv"

Please cite the paper using the following citation:

## Citation

    @inproceedings{joshi-etal-2022-cogmen,
        title = "{COGMEN}: {CO}ntextualized {GNN} based Multimodal Emotion recognitio{N}",
        author = "Joshi, Abhinav and Bhat, Ashwani and Jain, Ayush and Singh, Atin and Modi, Ashutosh",
        booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
        month = jul,
        year = "2022",
        address = "Seattle, United States",
        publisher = "Association for Computational Linguistics",
        url = "https://aclanthology.org/2022.naacl-main.306",
        pages = "4148--4164",
        abstract = "Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person{'}s emotions are influenced by the other speaker{'}s utterances and their own emotional state over the utterances. In this paper, we propose COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system that leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels.",}

## Acknowledgments

The structure of our code is inspired by [pytorch-DialogueGCN-mianzhang](https://github.com/mianzhang/dialogue_gcn).
alex-apostolo/roberta-base-filtered-cuad
alex-apostolo
2022-08-05T05:28:06Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:alex-apostolo/filtered-cuad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-08-04T09:12:07Z
--- license: mit tags: - generated_from_trainer datasets: - alex-apostolo/filtered-cuad model-index: - name: roberta-base-filtered-cuad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-filtered-cuad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the cuad dataset. It achieves the following results on the evaluation set: - Loss: 0.0396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0502 | 1.0 | 8442 | 0.0467 | | 0.0397 | 2.0 | 16884 | 0.0436 | | 0.032 | 3.0 | 25326 | 0.0396 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
zhiguoxu/chinese-roberta-wwm-ext-finetuned2
zhiguoxu
2022-08-05T03:45:08Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-03T07:54:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: chinese-roberta-wwm-ext-finetuned2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chinese-roberta-wwm-ext-finetuned2 This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1448 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.4081 | 1.0 | 3 | 0.9711 | 0.7273 | 0.6573 | | 0.9516 | 2.0 | 6 | 0.8174 | 0.8182 | 0.8160 | | 0.8945 | 3.0 | 9 | 0.6617 | 0.9091 | 0.9124 | | 0.7042 | 4.0 | 12 | 0.5308 | 1.0 | 1.0 | | 0.6641 | 5.0 | 15 | 0.4649 | 1.0 | 1.0 | | 0.5731 | 6.0 | 18 | 0.4046 | 1.0 | 1.0 | | 0.5132 | 7.0 | 21 | 0.3527 | 1.0 | 1.0 | | 0.3999 | 8.0 | 24 | 0.3070 | 1.0 | 1.0 | | 0.4198 | 9.0 | 27 | 0.2673 | 1.0 | 1.0 | | 0.3677 | 10.0 | 30 | 0.2378 | 1.0 | 1.0 | | 0.3545 | 11.0 | 33 | 0.2168 | 1.0 | 1.0 | | 0.3237 | 12.0 | 36 | 0.1980 | 1.0 | 1.0 | | 0.3122 | 13.0 | 39 | 0.1860 | 1.0 | 1.0 | | 0.2802 | 14.0 | 42 | 0.1759 | 1.0 | 1.0 | | 0.2552 | 15.0 | 45 | 0.1671 | 1.0 | 1.0 | | 0.2475 | 16.0 | 48 | 0.1598 | 1.0 | 1.0 | | 0.2259 | 17.0 | 51 | 0.1541 | 1.0 | 1.0 | | 0.201 | 18.0 | 54 | 0.1492 | 1.0 | 1.0 | | 0.2083 | 19.0 | 57 | 0.1461 | 1.0 | 1.0 | | 0.2281 | 20.0 | 60 | 0.1448 | 1.0 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
ariesutiono/scibert-lm-v1-finetuned-20
ariesutiono
2022-08-05T03:07:59Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:conll2003", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-08-04T01:57:31Z
--- tags: - generated_from_trainer datasets: - conll2003 model-index: - name: scibert-lm-v1-finetuned-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scibert-lm-v1-finetuned-20 This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 22.6145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0118 | 1.0 | 1756 | 15.0609 | | 0.0001 | 2.0 | 3512 | 17.9265 | | 0.0 | 3.0 | 5268 | 18.6256 | | 0.0001 | 4.0 | 7024 | 19.5144 | | 0.0002 | 5.0 | 8780 | 19.8926 | | 0.0 | 6.0 | 10536 | 21.6975 | | 0.0 | 7.0 | 12292 | 22.2388 | | 0.0 | 8.0 | 14048 | 21.0441 | | 0.0 | 9.0 | 15804 | 21.6852 | | 0.0 | 10.0 | 17560 | 22.4439 | | 0.0 | 11.0 | 19316 | 20.9994 | | 0.0 | 12.0 | 21072 | 21.7275 | | 0.0 | 13.0 | 22828 | 22.1329 | | 0.0 | 14.0 | 24584 | 22.4599 | | 0.0 | 15.0 | 26340 | 22.5726 | | 0.0 | 16.0 | 28096 | 22.7823 | | 0.0 | 17.0 | 29852 | 22.4167 | | 0.0 | 18.0 | 31608 | 22.4075 | | 0.0 | 19.0 | 33364 | 22.5731 | | 0.0 | 20.0 | 35120 | 22.6145 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
tals/roberta_python
tals
2022-08-05T02:30:51Z
5
2
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2106.05784", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: code
datasets:
- code_search_net
- Fraser/python-lines
tags:
- python
- code
- masked-lm
widget:
- text: "assert 6 == sum([i for i in range(<mask>)])"
---

# roberta_python

# Details

This is a roBERTa-base model trained on the python part of [CodeSearchNet](https://github.com/github/CodeSearchNet) and reached a dev perplexity of 3.296.

This model was used for the Programming Puzzles enumerative solver baseline detailed in [Programming Puzzles paper](https://arxiv.org/abs/2106.05784).

See also the [Python Programming Puzzles (P3) Repository](https://github.com/microsoft/PythonProgrammingPuzzles) for more details.

# Usage

You can either load the model and further fine-tune it for a target task (as done for the puzzle solver), or you can experiment with mask-filling directly with this model as in the following example:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("tals/roberta_python")
model = AutoModelWithLMHead.from_pretrained("tals/roberta_python")

demo = pipeline("fill-mask", model=model, tokenizer=tokenizer)

code = """sum= 0
for i in range(<mask>):
    sum += i
assert sum == 6
"""

demo(code)
```

# BibTeX entry and citation info

```bibtex
@inproceedings{
schuster2021programming,
title={Programming Puzzles},
author={Tal Schuster and Ashwin Kalyan and Alex Polozov and Adam Tauman Kalai},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
year={2021},
url={https://openreview.net/forum?id=fe_hCc4RBrg}
}
```
fzwd6666/NLTBert_multi_fine_tune_new
fzwd6666
2022-08-05T00:22:54Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-05T00:04:38Z
This model is a fine-tuned version of fzwd6666/Ged_bert_new with 4 layers on an NLT dataset. It achieves the following results on the evaluation set: {'precision': 0.9795081967213115} {'recall': 0.989648033126294} {'f1': 0.984552008238929} {'accuracy': 0.9843227424749164} Training hyperparameters: learning_rate: 1e-4 train_batch_size: 8 eval_batch_size: 8 optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 weight_decay= 0.01 lr_scheduler_type: linear num_epochs: 3 It achieves the following results on the test set: Incorrect UD Padded: {'precision': 0.6878048780487804} {'recall': 0.2863913337846987} {'f1': 0.4043977055449331} {'accuracy': 0.4722575180008471} Incorrect UD Unigram: {'precision': 0.6348314606741573} {'recall': 0.3060257278266757} {'f1': 0.4129739607126542} {'accuracy': 0.4557390936044049} Incorrect UD Bigram: {'precision': 0.6588419405320813} {'recall': 0.28503723764387273} {'f1': 0.3979206049149338} {'accuracy': 0.4603981363828886} Incorrect UD All: {'precision': 0.4} {'recall': 0.0013540961408259986} {'f1': 0.002699055330634278} {'accuracy': 0.373994070309191} Incorrect Sentence: {'precision': 0.5} {'recall': 0.012186865267433988} {'f1': 0.02379378717779247} {'accuracy': 0.37441761965268955}
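Example usage (a minimal sketch, not from the original author; the label names are not documented here, so inspect model.config.id2label to interpret the output):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fzwd6666/NLTBert_multi_fine_tune_new")
print(classifier("She go to school every day."))  # placeholder input sentence
```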
fzwd6666/NLTBert_singal_fine_tune_new
fzwd6666
2022-08-04T23:52:24Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-04T23:28:29Z
This model is a fine-tuned version of bert-base-uncased with 4 layers on an NLT dataset.
It achieves the following results on the evaluation set:
{'precision': 0.9804255319148936}
{'recall': 0.9888412017167382}
{'f1': 0.9846153846153847}
{'accuracy': 0.9849498327759197}

Training hyperparameters:
learning_rate: 1e-4
train_batch_size: 8
eval_batch_size: 8
optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
weight_decay: 0.01
lr_scheduler_type: linear
num_epochs: 3

It achieves the following results on the test set:

Incorrect UD Padded:
{'precision': 0.6703910614525139}
{'recall': 0.32498307379823965}
{'f1': 0.43775649794801635}
{'accuracy': 0.4777636594663278}

Incorrect UD Unigram:
{'precision': 0.6460176991150443}
{'recall': 0.3953960731211916}
{'f1': 0.4905501889962201}
{'accuracy': 0.4862346463362982}

Incorrect UD Bigram:
{'precision': 0.6617647058823529}
{'recall': 0.33513879485443465}
{'f1': 0.44494382022471907}
{'accuracy': 0.4769165607793308}

Incorrect UD All:
{'precision': 0.5714285714285714}
{'recall': 0.002708192281651997}
{'f1': 0.005390835579514824}
{'accuracy': 0.3748411689961881}

Incorrect Sentence:
{'precision': 0.6274149034038639}
{'recall': 0.923493568043331}
{'f1': 0.7471925499863051}
{'accuracy': 0.6090639559508683}
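Example usage (a minimal sketch, not from the original author; label names are not documented, so check model.config.id2label before interpreting the scores):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fzwd6666/NLTBert_singal_fine_tune_new")
print(classifier("She go to school every day."))  # placeholder input sentence
```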
huggingtweets/dominic_w-lastmjs-vitalikbuterin
huggingtweets
2022-08-04T23:40:33Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-04T23:38:29Z
--- language: en thumbnail: http://www.huggingtweets.com/dominic_w-lastmjs-vitalikbuterin/1659656428920/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1376912180721766401/ZVhVhhQ7_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/977496875887558661/L86xyLF4_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/994681826286301184/ZNY20HQG_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">lastmjs.eth ∞ & vitalik.eth & dom.icp ∞</div> <div style="text-align: center; font-size: 14px;">@dominic_w-lastmjs-vitalikbuterin</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from lastmjs.eth ∞ & vitalik.eth & dom.icp ∞. | Data | lastmjs.eth ∞ | vitalik.eth | dom.icp ∞ | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3246 | 3249 | | Retweets | 14 | 236 | 322 | | Short tweets | 185 | 122 | 61 | | Tweets kept | 3051 | 2888 | 2866 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rlc6tzy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dominic_w-lastmjs-vitalikbuterin's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hxl56uf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hxl56uf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dominic_w-lastmjs-vitalikbuterin') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
SharpAI/mal-tls-bert-base-w8a8
SharpAI
2022-08-04T23:39:11Z
4
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-27T21:02:28Z
--- tags: - generated_from_keras_callback model-index: - name: mal-tls-bert-base-w8a8 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mal-tls-bert-base-w8a8 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.15.0 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.10.3
fzwd6666/NLI_new
fzwd6666
2022-08-04T22:33:38Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-04T21:42:12Z
This model is a fine-tuned version of bert-base-uncased on an NLI dataset. It achieves the following results on the evaluation set: {'precision': 0.9690210656753407} {'recall': 0.9722337339411521} {'f1': 0.9706247414149772} {'accuracy': 0.9535340314136126} Training hyperparameters: learning_rate: 2e-5 train_batch_size: 8 eval_batch_size: 8 optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 weight_decay= 0.01 lr_scheduler_type: linear num_epochs: 3 It achieves the following results on the test set: Incorrect UD Padded: {'precision': 0.623370110330993} {'recall': 0.8415707515233581} {'f1': 0.7162201094785364} {'accuracy': 0.5828038966539602} Incorrect UD Unigram: {'precision': 0.6211431461810825} {'recall': 0.8314150304671631} {'f1': 0.7110596409959468} {'accuracy': 0.5772977551884795} Incorrect UD Bigram: {'precision': 0.6203980099502487} {'recall': 0.8442789438050101} {'f1': 0.7152279896759391} {'accuracy': 0.579415501905972} Incorrect UD All: {'precision': 0.605543710021322} {'recall': 0.1922816519972918} {'f1': 0.2918807810894142} {'accuracy': 0.4163490046590428} Incorrect Sentence: {'precision': 0.6411042944785276} {'recall': 0.4245091401489506} {'f1': 0.5107942973523422} {'accuracy': 0.4913172384582804}
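Example usage (a minimal sketch, not from the original author; whether the model expects a single sentence or a premise/hypothesis pair, and which labels it uses, is not documented, so check model.config.id2label and the training setup before relying on it):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("fzwd6666/NLI_new")
model = AutoModelForSequenceClassification.from_pretrained("fzwd6666/NLI_new")

# Premise/hypothesis pairing is an assumption
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs, model.config.id2label)
```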