Dataset columns:

| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-29 18:27:06 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 526 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-29 18:26:56 |
| card | string | length 11 to 1.01M |
huxxx657/roberta-base-finetuned-scrambled-squad-15
huxxx657
2022-05-10T21:13:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-10T19:13:39Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-finetuned-scrambled-squad-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-scrambled-squad-15 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.8722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8944 | 1.0 | 5590 | 1.8722 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
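A usage sketch for this checkpoint (not part of the generated card; the question/context pair below is an illustrative assumption, drawn from the card's own description):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="huxxx657/roberta-base-finetuned-scrambled-squad-15",
)

# Illustrative question/context pair (placeholder data).
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```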
tjscollins/ppo-LunarLander-v2
tjscollins
2022-05-10T20:45:37Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T20:45:13Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 287.12 +/- 20.40 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
kosta-naumenko/ppo-LunarLander-v2-2
kosta-naumenko
2022-05-10T20:06:54Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T20:06:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 228.05 +/- 22.63 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
m-luebbers/mb-LunarLander-v1
m-luebbers
2022-05-10T19:17:16Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T19:16:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 224.96 +/- 73.06 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
huxxx657/roberta-base-finetuned-scrambled-squad-10
huxxx657
2022-05-10T19:05:14Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-10T17:05:40Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-finetuned-scrambled-squad-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-scrambled-squad-10 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.7200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7482 | 1.0 | 5532 | 1.7200 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Extred/TEST2ppo-LunarLander-v2-MlpLnLstmPolicy
Extred
2022-05-10T19:02:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T18:17:58Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 203.89 +/- 88.13 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
Xuandong/HPD-TinyBERT-F128
Xuandong
2022-05-10T17:55:05Z
33
1
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2203.07687", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-05-10T17:04:19Z
--- license: apache-2.0 --- # HPD-TinyBERT-F128 This repository contains the pre-trained models for our paper [Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation](https://arxiv.org/abs/2203.07687). The sentence embedding model contains only 14M parameters and the model size is only 55MB. ## Overview We propose **H**omomorphic **P**rojective **D**istillation (HPD) to learn compressed sentence embeddings. Our method augments a small Transformer encoder model with learnable projection layers to produce compact representations while mimicking a large pre-trained language model to retain the sentence representation quality. ## Details This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search. The teacher model is [`princeton-nlp/sup-simcse-roberta-large`](https://huggingface.co/princeton-nlp/sup-simcse-roberta-large) and the student model is [`nreimers/TinyBERT_L-4_H-312_v2`](https://huggingface.co/nreimers/TinyBERT_L-4_H-312_v2). ## Usage Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` After installing the package, you can simply load our model ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('Xuandong/HPD-TinyBERT-F128') ``` Then you can use our model for **encoding sentences into embeddings** ```python sentences = ['He plays guitar.', 'A street vendor is outside.'] sentence_embeddings = model.encode(sentences) for sentence, embedding in zip(sentences, sentence_embeddings): print("Sentence:", sentence) print("Embedding:", embedding) print("") ``` ## Evaluation Results We evaluate our model on semantic textual similarity (STS) tasks. The results are: | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. | |-------|-------|-------|-------|-------|--------------|-----------------|-------| | 74.29 | 83.05 | 78.80 | 84.62 | 81.17 | 84.36 | 80.83 | 81.02 | ## Training Please refer to the github repo (https://github.com/XuandongZhao/HPD) for the details about the training. ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 312, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 312, 'out_features': 128, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citation Please cite our paper if you use HPD in your work: ```bibtex @article{zhao2022compressing, title={Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation}, author={Zhao, Xuandong and Yu, Zhiguo and Wu, Ming and Li, Lei}, journal={arXiv preprint arXiv:2203.07687}, year={2022} } ```
Xuandong/HPD-MiniLM-F128
Xuandong
2022-05-10T17:54:43Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2203.07687", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-05-10T17:01:40Z
--- license: apache-2.0 --- # HPD-MiniLM-F128 This repository contains the pre-trained models for our paper [Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation](https://arxiv.org/abs/2203.07687). The sentence embedding model contains only 23M parameters and the model size is only 87MB. ## Overview We propose **H**omomorphic **P**rojective **D**istillation (HPD) to learn compressed sentence embeddings. Our method augments a small Transformer encoder model with learnable projection layers to produce compact representations while mimicking a large pre-trained language model to retain the sentence representation quality. ## Details This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search. The teacher model is [`princeton-nlp/sup-simcse-roberta-large`](https://huggingface.co/princeton-nlp/sup-simcse-roberta-large) and the student model is [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased). ## Usage Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` After installing the package, you can simply load our model ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('Xuandong/HPD-MiniLM-F128') ``` Then you can use our model for **encoding sentences into embeddings** ```python sentences = ['He plays guitar.', 'A street vendor is outside.'] sentence_embeddings = model.encode(sentences) for sentence, embedding in zip(sentences, sentence_embeddings): print("Sentence:", sentence) print("Embedding:", embedding) print("") ``` ## Evaluation Results We evaluate our model on semantic textual similarity (STS) tasks. The results are: | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. | |-------|-------|-------|-------|-------|--------------|-----------------|-------| | 74.94 | 84.52 | 80.25 | 84.87 | 81.90 | 84.98 | 81.15 | 81.80 | ## Training Please refer to the github repo (https://github.com/XuandongZhao/HPD) for the details about the training. ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 384, 'out_features': 128, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citation Please cite our paper if you use HPD in your work: ```bibtex @article{zhao2022compressing, title={Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation}, author={Zhao, Xuandong and Yu, Zhiguo and Wu, Ming and Li, Lei}, journal={arXiv preprint arXiv:2203.07687}, year={2022} } ```
cmcmorrow/distilbert-rater
cmcmorrow
2022-05-10T17:52:42Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-10T17:47:22Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-rater results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rater This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
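Since the card gives no usage section, here is a hedged inference sketch (the input sentence is a placeholder; the label vocabulary is not documented in the card):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier.
clf = pipeline("text-classification", model="cmcmorrow/distilbert-rater")

# Placeholder input; the meaning of the predicted labels is not documented.
preds = clf("This is a sample sentence to rate.")
print(preds)
```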
allenai/multicite-qa-qasper
allenai
2022-05-10T17:48:30Z
18
1
transformers
[ "transformers", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-05-10T12:04:24Z
--- language: en license: mit --- # MultiCite: Multi-label Citation Intent Analysis as paper-level Q&A (NAACL 2022) This model has been trained on the data available here: https://github.com/allenai/multicite.
paultimothymooney/distilbert-rater
paultimothymooney
2022-05-10T17:40:47Z
17
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-10T16:11:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-rater results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rater This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5
husnu
2022-05-10T17:22:15Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-10T13:23:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5 This model is a fine-tuned version of [husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4](https://huggingface.co/husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3439 - Wer: 0.3634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1243 | 0.51 | 400 | 0.4312 | 0.4202 | | 0.1956 | 1.02 | 800 | 0.4421 | 0.4498 | | 0.1816 | 1.53 | 1200 | 0.4012 | 0.4285 | | 0.1548 | 2.04 | 1600 | 0.3720 | 0.3845 | | 0.1171 | 2.55 | 2000 | 0.3439 | 0.3634 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.10.3
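A transcription sketch for this checkpoint (the silent test signal is a stand-in; real Turkish speech sampled at 16 kHz is the expected input):

```python
import numpy as np
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5",
)

# One second of silence at 16 kHz as a placeholder for real speech.
audio = {"raw": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}
transcription = asr(audio)
print(transcription["text"])
```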
datauma/mt5-small-finetuned-amazon-en-es
datauma
2022-05-10T16:52:35Z
3
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-04T04:07:58Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: datauma/mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # datauma/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.2505 - Validation Loss: 3.4530 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 11.9288 | 5.8713 | 0 | | 6.6821 | 4.3246 | 1 | | 5.6453 | 3.8715 | 2 | | 5.0908 | 3.6368 | 3 | | 4.7348 | 3.5496 | 4 | | 4.5106 | 3.4939 | 5 | | 4.3261 | 3.4659 | 6 | | 4.2505 | 3.4530 | 7 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
anuragshas/wav2vec2-xls-r-300m-ur-cv9-with-lm
anuragshas
2022-05-10T16:51:19Z
7
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_9_0", "generated_from_trainer", "ur", "dataset:mozilla-foundation/common_voice_9_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-04T14:27:44Z
--- language: - ur license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_9_0 - generated_from_trainer datasets: - mozilla-foundation/common_voice_9_0 metrics: - wer model-index: - name: XLS-R-300M - Urdu results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: mozilla-foundation/common_voice_9_0 name: Common Voice 9 args: ur metrics: - type: wer value: 23.750 name: Test WER - name: Test CER type: cer value: 8.310 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300M - Urdu This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 0.4147 - Wer: 0.3172 - Cer: 0.1050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 5108 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 3.2894 | 7.83 | 400 | 3.1501 | 1.0 | 1.0 | | 1.8586 | 15.68 | 800 | 0.8871 | 0.6721 | 0.2402 | | 1.3431 | 23.52 | 1200 | 0.5813 | 0.5502 | 0.1939 | | 1.2052 | 31.37 | 1600 | 0.4956 | 0.4788 | 0.1665 | | 1.1097 | 39.21 | 2000 | 0.4447 | 0.4143 | 0.1397 | | 1.0528 | 47.06 | 2400 | 0.4439 | 0.3961 | 0.1333 | | 0.9939 | 54.89 | 2800 | 0.4348 | 0.4014 | 0.1379 | | 0.9441 | 62.74 | 3200 | 0.4236 | 0.3653 | 0.1223 | | 0.913 | 70.58 | 3600 | 0.4309 | 0.3475 | 0.1157 | | 0.8678 | 78.43 | 4000 | 0.4270 | 0.3337 | 0.1110 | | 0.8414 | 86.27 | 4400 | 0.4158 | 0.3220 | 0.1070 | | 0.817 | 94.12 | 4800 | 0.4185 | 0.3231 | 0.1072 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.1.1.dev0 - Tokenizers 0.12.1
Joiner/ppoLunarLanding-v2
Joiner
2022-05-10T16:44:09Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T16:43:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 126.84 +/- 80.67 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
anuragshas/wav2vec2-xls-r-300m-bn-cv9-with-lm
anuragshas
2022-05-10T16:17:38Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_9_0", "generated_from_trainer", "bn", "dataset:mozilla-foundation/common_voice_9_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-06T03:54:55Z
--- language: - bn license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_9_0 - generated_from_trainer datasets: - mozilla-foundation/common_voice_9_0 metrics: - wer model-index: - name: XLS-R-300M - Bengali results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: mozilla-foundation/common_voice_9_0 name: Common Voice 9 args: bn metrics: - type: wer value: 20.150 name: Test WER - name: Test CER type: cer value: 4.813 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300M - Bengali This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - BN dataset. It achieves the following results on the evaluation set: - Loss: 0.2297 - Wer: 0.2850 - Cer: 0.0660 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 8692 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 3.675 | 2.3 | 400 | 3.5052 | 1.0 | 1.0 | | 3.0446 | 4.6 | 800 | 2.2759 | 1.0052 | 0.5215 | | 1.7276 | 6.9 | 1200 | 0.7083 | 0.6697 | 0.1969 | | 1.5171 | 9.2 | 1600 | 0.5328 | 0.5733 | 0.1568 | | 1.4176 | 11.49 | 2000 | 0.4571 | 0.5161 | 0.1381 | | 1.343 | 13.79 | 2400 | 0.3910 | 0.4522 | 0.1160 | | 1.2743 | 16.09 | 2800 | 0.3534 | 0.4137 | 0.1044 | | 1.2396 | 18.39 | 3200 | 0.3278 | 0.3877 | 0.0959 | | 1.2035 | 20.69 | 3600 | 0.3109 | 0.3741 | 0.0917 | | 1.1745 | 22.99 | 4000 | 0.2972 | 0.3618 | 0.0882 | | 1.1541 | 25.29 | 4400 | 0.2836 | 0.3427 | 0.0832 | | 1.1372 | 27.59 | 4800 | 0.2759 | 0.3357 | 0.0812 | | 1.1048 | 29.89 | 5200 | 0.2669 | 0.3284 | 0.0783 | | 1.0966 | 32.18 | 5600 | 0.2678 | 0.3249 | 0.0775 | | 1.0747 | 34.48 | 6000 | 0.2547 | 0.3134 | 0.0748 | | 1.0593 | 36.78 | 6400 | 0.2491 | 0.3077 | 0.0728 | | 1.0417 | 39.08 | 6800 | 0.2450 | 0.3012 | 0.0711 | | 1.024 | 41.38 | 7200 | 0.2402 | 0.2956 | 0.0694 | | 1.0106 | 43.68 | 7600 | 0.2351 | 0.2915 | 0.0681 | | 1.0014 | 45.98 | 8000 | 0.2328 | 0.2896 | 0.0673 | | 0.9999 | 48.28 | 8400 | 0.2318 | 0.2866 | 0.0667 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.1.1.dev0 - Tokenizers 0.12.1
joitandr/TEST2ppo-LunarLander-v2
joitandr
2022-05-10T15:13:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T15:12:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 249.46 +/- 20.60 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
akozlo/con_gpt_med
akozlo
2022-05-10T12:52:01Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-10T12:47:23Z
--- tags: - generated_from_trainer model-index: - name: con_gpt_med_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # con_gpt_med_model This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
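No usage example is included in the card; a generation sketch (the prompt and sampling settings are illustrative assumptions):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 medium checkpoint.
generator = pipeline("text-generation", model="akozlo/con_gpt_med")

# Illustrative prompt; generation settings are arbitrary choices.
out = generator("The patient was", max_new_tokens=20, do_sample=False)
print(out[0]["generated_text"])
```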
darshanz/occupation-prediction
darshanz
2022-05-10T11:59:28Z
35
0
transformers
[ "transformers", "tf", "tensorboard", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-05-08T04:35:30Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: darshanz/occupation-prediction results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # darshanz/occupation-prediction This model is ViT base patch16, pretrained on the ImageNet dataset and then fine-tuned on our custom occupation-prediction dataset. The dataset contains facial images of Indian people labeled by occupation. The model predicts a person's occupation from a facial image, categorizing inputs into 5 classes: Anchor, Athlete, Doctor, Professor, and Farmer. It achieves an accuracy of 84.43%. ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 70, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.4}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 1.0840 | 0.6156 | 0.8813 | 0.6843 | 0.75 | 0.9700 | 0 | | 0.4686 | 0.8406 | 0.9875 | 0.5345 | 0.8100 | 0.9867 | 1 | | 0.2600 | 0.9312 | 0.9953 | 0.4805 | 0.8333 | 0.9800 | 2 | | 0.1515 | 0.9609 | 0.9969 | 0.5071 | 0.8267 | 0.9733 | 3 | | 0.0746 | 0.9875 | 1.0 | 0.4853 | 0.8500 | 0.9833 | 4 | | 0.0468 | 0.9953 | 1.0 | 0.5006 | 0.8433 | 0.9733 | 5 | | 0.0378 | 0.9953 | 1.0 | 0.4967 | 0.8433 | 0.9800 | 6 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Tokenizers 0.12.1
runjivu/TEST2ppo-LunarLander-v2
runjivu
2022-05-10T11:14:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T11:13:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 254.09 +/- 18.39 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
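The usage section above is still a template TODO. As a hedged sketch — not the author's code — checkpoints like this one are usually fetched with the `huggingface_sb3` helper and loaded back into Stable-Baselines3. The checkpoint `filename` below is an assumption (SB3 agents are normally uploaded as a single `.zip`), so check the repository's file list before running:

```python
REPO_ID = "runjivu/TEST2ppo-LunarLander-v2"
# NOTE: the archive name is an assumption -- verify it in the repo's file list.
FILENAME = "TEST2ppo-LunarLander-v2.zip"

if __name__ == "__main__":
    import gym
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import PPO
    from stable_baselines3.common.evaluation import evaluate_policy

    # load_from_hub caches the checkpoint locally and returns its path,
    # which PPO.load consumes directly.
    checkpoint = load_from_hub(repo_id=REPO_ID, filename=FILENAME)
    model = PPO.load(checkpoint)

    # Evaluate the restored agent for a few episodes.
    env = gym.make("LunarLander-v2")
    mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
    print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```

The same recipe applies to the other PPO LunarLander checkpoints in this dump, swapping in the corresponding `repo_id`.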
ebonazza2910/model-1h
ebonazza2910
2022-05-10T11:13:54Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-10T09:45:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: model-1h results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model-1h This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.8317 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 11.4106 | 1.24 | 10 | 7.1597 | 1.0 | | 4.777 | 2.47 | 20 | 3.9782 | 1.0 | | 3.6585 | 3.71 | 30 | 3.3961 | 1.0 | | 3.3143 | 4.94 | 40 | 3.1481 | 1.0 | | 3.3318 | 6.24 | 50 | 3.0596 | 1.0 | | 3.1368 | 7.47 | 60 | 2.9751 | 1.0 | | 3.1058 | 8.71 | 70 | 2.9510 | 1.0 | | 3.0605 | 9.94 | 80 | 2.9479 | 1.0 | | 3.2043 | 11.24 | 90 | 2.9270 | 1.0 | | 3.0424 | 12.47 | 100 | 2.9349 | 1.0 | | 3.0374 | 13.71 | 110 | 2.9316 | 1.0 | | 3.0256 | 14.94 | 120 | 2.9165 | 1.0 | | 3.1724 | 16.24 | 130 | 2.9076 | 1.0 | | 3.0119 | 17.47 | 140 | 2.9034 | 1.0 | | 2.9937 | 18.71 | 150 | 2.8812 | 1.0 | | 2.9775 | 19.94 | 160 | 2.8674 | 1.0 | | 3.0826 | 21.24 | 170 | 2.8147 | 1.0 | | 2.8717 | 22.47 | 180 | 2.7212 | 1.0 | | 2.7714 | 23.71 | 
190 | 2.6149 | 0.9952 | | 2.634 | 24.94 | 200 | 2.4611 | 0.9984 | | 2.5637 | 26.24 | 210 | 2.2734 | 1.0 | | 2.237 | 27.47 | 220 | 2.0705 | 1.0 | | 2.0381 | 28.71 | 230 | 1.9216 | 1.0 | | 1.8788 | 29.94 | 240 | 1.8317 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
mcsabai/huBert-fine-tuned-hungarian-squadv1
mcsabai
2022-05-10T10:59:53Z
11
3
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "hu", "endpoints_compatible", "region:us" ]
question-answering
2022-03-27T12:35:44Z
--- language: hu thumbnail: tags: - question-answering - bert widget: - text: "Melyik folyó szeli ketté Budapestet?" context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban." - text: "Mivel juthatunk fel az Óvárosba?" context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban." --- ## MODEL DESCRIPTION huBERT base model (cased) fine-tuned on SQuAD v1 - huBert model + Tokenizer: https://huggingface.co/SZTAKI-HLT/hubert-base-cc - Hungarian SQUAD v1 dataset: Machine Translated SQuAD dataset (Google Translate API) - This is a demo model. Date of publication: 2022.03.27. ## Model in action - Fast usage with pipelines: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mcsabai/huBert-fine-tuned-hungarian-squadv1", tokenizer="mcsabai/huBert-fine-tuned-hungarian-squadv1" ) predictions = qa_pipeline({ 'context': "Anita vagyok és Budapesten élek már több mint 4 éve.", 'question': "Hol lakik Anita?" }) print(predictions) # output: # {'score': 0.9892364144325256, 'start': 16, 'end': 26, 'answer': 'Budapesten'} ```
osanseviero/TEST2ppo-LunarLander-v3
osanseviero
2022-05-10T10:41:13Z
4
0
stable-baselines3
[ "stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T09:38:06Z
--- library_name: stable-baselines3 tags: - MountainCar-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -97.87 +/- 143.38 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: MountainCar-v0 type: MountainCar-v0 --- # **PPO** Agent playing **MountainCar-v0** This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
SreyanG-NVIDIA/bert-base-cased-finetuned-ner
SreyanG-NVIDIA
2022-05-10T10:05:34Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-10T09:56:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-cased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9325301204819277 - name: Recall type: recall value: 0.9374663556432801 - name: F1 type: f1 value: 0.9349917229654156 - name: Accuracy type: accuracy value: 0.9840466238888562 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0650 - Precision: 0.9325 - Recall: 0.9375 - F1: 0.9350 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2346 | 1.0 | 878 | 0.0722 | 0.9168 | 0.9217 | 0.9192 | 0.9795 | | 0.0483 | 2.0 | 1756 | 0.0618 | 0.9299 | 0.9370 | 0.9335 | 0.9837 | | 0.0262 | 3.0 | 2634 | 0.0650 | 0.9325 | 0.9375 | 0.9350 | 0.9840 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
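For inference, the fine-tuned checkpoint can be wrapped in a `token-classification` pipeline — a hedged sketch rather than part of the original card; `aggregation_strategy="simple"` merges word-piece predictions back into whole entities:

```python
MODEL_ID = "SreyanG-NVIDIA/bert-base-cased-finetuned-ner"

if __name__ == "__main__":
    from transformers import pipeline

    # "simple" aggregation merges sub-word (word-piece) tags into whole entities,
    # so "New" + "York" comes back as one LOC span.
    ner = pipeline("token-classification", model=MODEL_ID,
                   aggregation_strategy="simple")
    for entity in ner("Angela Merkel visited the Google office in Paris."):
        print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```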
patrickvonplaten/wav2vec2-base-timit-demo-colab
patrickvonplaten
2022-05-10T09:38:48Z
449
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4888 - Wer: 0.3392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1134 | 4.0 | 500 | 0.4250 | 0.3626 | | 0.1035 | 8.0 | 1000 | 0.4980 | 0.3650 | | 0.0801 | 12.0 | 1500 | 0.5563 | 0.3632 | | 0.0592 | 16.0 | 2000 | 0.6222 | 0.3607 | | 0.0563 | 20.0 | 2500 | 0.4763 | 0.3457 | | 0.0611 | 24.0 | 3000 | 0.4938 | 0.3489 | | 0.0475 | 28.0 | 3500 | 0.4888 | 0.3392 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
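A hedged inference sketch (assuming the repository ships a matching `Wav2Vec2Processor`, as `Trainer` fine-tunes normally do): 16 kHz mono audio goes in, greedy CTC decoding comes out. The silent dummy waveform below only stands in for a real recording:

```python
MODEL_ID = "patrickvonplaten/wav2vec2-base-timit-demo-colab"

if __name__ == "__main__":
    import numpy as np
    import torch
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
    model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

    # One second of silence standing in for a real 16 kHz mono recording.
    speech = np.zeros(16_000, dtype=np.float32)
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

    with torch.no_grad():
        logits = model(inputs.input_values).logits

    # Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
    predicted_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(predicted_ids))
```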
etsymba/ppo-LunarLander-v2
etsymba
2022-05-10T09:26:45Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T09:23:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 208.93 +/- 53.16 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
Pausaxo/ppo-LunarLander-v2
Pausaxo
2022-05-10T08:57:23Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T08:56:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 186.57 +/- 75.05 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
melodisease/ppo-LunarLander-v2
melodisease
2022-05-10T08:57:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T08:56:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 243.43 +/- 22.55 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
mrm8488/electricidad-base-finetuned-parmex
mrm8488
2022-05-10T08:18:19Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-10T07:56:42Z
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: electricidad-base-finetuned-parmex results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electricidad-base-finetuned-parmex This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0372 - F1: 0.9764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.309269976237555e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 208 | 0.0377 | 0.9801 | | No log | 2.0 | 416 | 0.0372 | 0.9764 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Theimisa/distilbert-base-uncased-aisera_texts-v3
Theimisa
2022-05-10T07:49:12Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-09T11:41:54Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-aisera_texts-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-aisera_texts-v3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0183 | 1.0 | 3875 | 1.8913 | | 1.9018 | 2.0 | 7750 | 1.8106 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
RicardFos/PPO-LunarLander-v2
RicardFos
2022-05-10T07:22:20Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T07:21:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 241.12 +/- 21.01 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
promsoft/ll2022-05-09-lunar4
promsoft
2022-05-10T06:43:14Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T06:13:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 252.15 +/- 22.31 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
ironbar/ppo-lunarlander-v2-local-train-bigger
ironbar
2022-05-10T05:32:57Z
8
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-10T05:32:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 302.71 +/- 7.68 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
suicaokhoailang/gpt-neo-vi-comments-finetuned
suicaokhoailang
2022-05-10T05:19:54Z
11
1
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-05-10T03:31:14Z
--- license: mit --- GPT-Neo-small for Vietnamese. Based on [NlpHUST/gpt-neo-vi-small](https://huggingface.co/NlpHUST/gpt-neo-vi-small), fine-tuned on a dataset of [10M Facebook comments](https://github.com/binhvq/news-corpus).
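A hedged generation sketch (the prompt and sampling settings are illustrative, not from the authors):

```python
MODEL_ID = "suicaokhoailang/gpt-neo-vi-comments-finetuned"

if __name__ == "__main__":
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    # Sample a short Vietnamese continuation for an illustrative prompt.
    out = generator("Hôm nay trời đẹp quá",
                    max_new_tokens=40, do_sample=True, top_p=0.95)
    print(out[0]["generated_text"])
```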
kornosk/bert-political-election2020-twitter-mlm
kornosk
2022-05-10T04:45:45Z
88
4
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "twitter", "masked-token-prediction", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "en" tags: - twitter - masked-token-prediction - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Political Election 2020 Pre-trained weights for [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. We use the initialized weights from BERT-base (uncased) or `bert-base-uncased`. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective. # Usage This pre-trained language model **can be fine-tunned to any downstream task (e.g. classification)**. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import BertTokenizer, BertForMaskedLM, pipeline import torch # Choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Select mode path here pretrained_LM_path = "kornosk/bert-political-election2020-twitter-mlm" # Load model tokenizer = BertTokenizer.from_pretrained(pretrained_LM_path) model = BertForMaskedLM.from_pretrained(pretrained_LM_path) # Fill mask example = "Trump is the [MASK] of USA" fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) # Use following line instead of the above one does not work. # Huggingface have been updated, newer version accepts a string of model name instead. fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer) outputs = fill_mask(example) print(outputs) # See embeddings inputs = tokenizer(example, return_tensors="pt") outputs = model(**inputs) print(outputs) # OR you can use this model to train on your downstream task! 
# Please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
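Since the card says the checkpoint can be fine-tuned on downstream tasks, here is a minimal warm-start sketch for sequence classification. The three-way label set is an assumption (e.g. favor / against / neutral stance); the classification head is freshly initialized, so it still needs training:

```python
NUM_LABELS = 3  # assumed label set, e.g. favor / against / neutral

if __name__ == "__main__":
    from transformers import BertForSequenceClassification, BertTokenizer

    model_id = "kornosk/bert-political-election2020-twitter-mlm"
    tokenizer = BertTokenizer.from_pretrained(model_id)
    # Drops the MLM head and attaches a randomly initialized
    # classification head on top of the pre-trained encoder;
    # transformers will warn that the new head weights are untrained.
    model = BertForSequenceClassification.from_pretrained(
        model_id, num_labels=NUM_LABELS
    )
    # From here, fine-tune with Trainer or your own training loop.
    print(model.config.num_labels)
```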
Sounak/distilbert-finetuned
Sounak
2022-05-10T04:05:02Z
3
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-10T04:00:48Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Sounak/distilbert-finetuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Sounak/distilbert-finetuned This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0422 - Validation Loss: 1.7343 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 468, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9989 | 1.6524 | 0 | | 1.3489 | 1.6702 | 1 | | 1.0422 | 1.7343 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
kornosk/polibertweet-political-twitter-roberta-mlm-small
kornosk
2022-05-10T03:49:55Z
16
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "twitter", "masked-token-prediction", "bertweet", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-10T03:41:49Z
--- language: "en" tags: - twitter - masked-token-prediction - bertweet - election2020 - politics license: "gpl-3.0" --- # This version is trained on a smaller data set. See the full-size version at [PoliBERTweet](https://huggingface.co/kornosk/polibertweet-mlm). # Citation ```bibtex @inproceedings{kawintiranon2022polibertweet, title = {PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter}, author = {Kawintiranon, Kornraphop and Singh, Lisa}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, year = {2022}, publisher = {European Language Resources Association} } ```
ckiplab/bert-tiny-chinese
ckiplab
2022-05-10T03:28:12Z
226
7
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "lm-head", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-10T02:53:57Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - lm-head - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
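Because this repo ships an LM head, masked-token prediction can be sketched with the `fill-mask` pipeline, passing the `bert-base-chinese` tokenizer explicitly as the card advises (the example sentence is illustrative):

```python
MODEL_ID = "ckiplab/bert-tiny-chinese"

if __name__ == "__main__":
    from transformers import BertTokenizerFast, pipeline

    # The card advises using the bert-base-chinese tokenizer explicitly.
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    fill_mask = pipeline("fill-mask", model=MODEL_ID, tokenizer=tokenizer)
    # Predict candidates for the masked character.
    for candidate in fill_mask("今天天氣很[MASK]。"):
        print(candidate["token_str"], round(candidate["score"], 4))
```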
ckiplab/bert-tiny-chinese-ws
ckiplab
2022-05-10T03:28:12Z
1,641
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-10T02:54:32Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ws') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
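Word segmentation here is framed as token classification — each character receives a B (word-initial) or I (word-internal) tag. A hedged sketch; grouping tags back into words is left to the caller or to the `ckip-transformers` driver:

```python
MODEL_ID = "ckiplab/bert-tiny-chinese-ws"

if __name__ == "__main__":
    from transformers import BertTokenizerFast, pipeline

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    ws = pipeline("token-classification", model=MODEL_ID, tokenizer=tokenizer)
    # Each character gets a B (word-initial) or I (word-internal) tag.
    for token in ws("傅達仁今將執行安樂死"):
        print(token["word"], token["entity"])
```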
ckiplab/bert-base-chinese-ner
ckiplab
2022-05-10T03:28:12Z
31,527
112
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Base Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/bert-tiny-chinese-ner
ckiplab
2022-05-10T03:28:12Z
1,433
4
transformers
[ "transformers", "pytorch", "bert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-10T02:55:04Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/bert-tiny-chinese-pos
ckiplab
2022-05-10T03:28:12Z
63
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-10T02:54:45Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-pos') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/albert-tiny-chinese-ner
ckiplab
2022-05-10T03:28:10Z
122
2
transformers
[ "transformers", "pytorch", "albert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - albert - zh license: gpl-3.0 --- # CKIP ALBERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/albert-base-chinese-pos
ckiplab
2022-05-10T03:28:09Z
1,144
2
transformers
[ "transformers", "pytorch", "albert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - albert - zh license: gpl-3.0 --- # CKIP ALBERT Base Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-pos') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/albert-base-chinese-ws
ckiplab
2022-05-10T03:28:09Z
1,733
2
transformers
[ "transformers", "pytorch", "albert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - albert - zh license: gpl-3.0 --- # CKIP ALBERT Base Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ws') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/albert-base-chinese
ckiplab
2022-05-10T03:28:08Z
1,117
12
transformers
[ "transformers", "pytorch", "albert", "fill-mask", "lm-head", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - lm-head - albert - zh license: gpl-3.0 --- # CKIP ALBERT Base Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/albert-base-chinese') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
ckiplab/albert-base-chinese-ner
ckiplab
2022-05-10T03:28:08Z
2,295
14
transformers
[ "transformers", "pytorch", "albert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - albert - zh license: gpl-3.0 --- # CKIP ALBERT Base Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
huxxx657/roberta-base-finetuned-squad-3
huxxx657
2022-05-10T01:09:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-09T22:50:17Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-finetuned-squad-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-squad-3 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.8358 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8626 | 1.0 | 5536 | 0.8358 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
jayeshgar/ppo-LunarLander-v2
jayeshgar
2022-05-09T23:57:37Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T23:57:06Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 209.48 +/- 63.51 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
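The Stable-Baselines3 cards in this dump report `mean_reward` as `mean +/- std` over a set of evaluation episodes. A minimal sketch of how such a figure is produced from a list of episode returns (the returns below are made-up illustrative numbers, not from any of these runs):

```python
import statistics

def summarize_returns(episode_returns):
    """Return (mean, std) as used in `mean_reward: mean +/- std` reports.

    pstdev is the population standard deviation, matching numpy's
    default std, which SB3-style evaluation typically reports.
    """
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return mean, std

# Hypothetical returns from 5 evaluation episodes
returns = [210.0, 180.0, 250.0, 190.0, 220.0]
mean, std = summarize_returns(returns)
print(f"mean_reward: {mean:.2f} +/- {std:.2f}")  # → mean_reward: 210.00 +/- 24.49
```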
meln1k/ppo-LunarLander-v2
meln1k
2022-05-09T23:33:56Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-06T18:39:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 289.26 +/- 18.33 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
danielmaxwell/TEST2ppo-LunarLander-v2
danielmaxwell
2022-05-09T21:01:58Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T21:01:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 137.66 +/- 94.84 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
johko/ppo-lunarlander-v2
johko
2022-05-09T20:41:17Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T20:16:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 277.89 +/- 22.93 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
KenP/marian-finetuned-kde4-en-to-fr
KenP
2022-05-09T20:36:25Z
3
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-09T18:11:12Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: KenP/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # KenP/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6855 - Validation Loss: 0.8088 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0599 | 0.8835 | 0 | | 0.7975 | 0.8254 | 1 | | 0.6855 | 0.8088 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
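The Marian card above configures a `PolynomialDecay` schedule with power 1.0 and `end_learning_rate` 0.0, which reduces to plain linear decay. A minimal sketch of that rule, assuming the step count is clamped to `decay_steps` (the standard behavior for this schedule shape):

```python
def polynomial_decay(step, initial_lr=5e-05, decay_steps=17733,
                     end_lr=0.0, power=1.0):
    """Polynomial learning-rate decay with the settings from the card above."""
    step = min(step, decay_steps)          # hold at end_lr once decay finishes
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # 5e-05 at the first step
print(polynomial_decay(17733))  # 0.0 at the end of training
```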
umbertospazio/FirstPPO-LunarLander-v2
umbertospazio
2022-05-09T20:10:41Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T20:10:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -212.53 +/- 86.74 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
suppabob/TEST2ppo-LunarLander-v2
suppabob
2022-05-09T18:55:43Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T18:55:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 218.36 +/- 65.70 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
TinySuitStarfish/ppo-lunarlanderabhishek-v2
TinySuitStarfish
2022-05-09T18:02:18Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T18:01:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 169.97 +/- 15.25 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
promsoft/ll2022-05-09-lunar3
promsoft
2022-05-09T17:32:02Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T17:31:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 262.07 +/- 20.63 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
ysharma/distilbert-base-uncased-finetuned-emotions
ysharma
2022-05-09T17:10:14Z
19
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-09T16:29:30Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: distilbert-base-uncased-finetuned-emotions results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: F1 type: f1 value: 0.9331148494056558 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotions This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1579 - Acc: 0.933 - F1: 0.9331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Acc | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.1723 | 1.0 | 250 | 0.1838 | 0.9315 | 0.9312 | | 0.1102 | 2.0 | 500 | 0.1579 | 0.933 | 0.9331 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
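The emotion-classification card above reports F1 alongside accuracy; for a multiclass dataset like emotion this is usually the support-weighted average of per-class F1 scores. A minimal sketch of that computation (the labels below are made-up examples, not from the dataset):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-class F1 scores."""
    support = Counter(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += f1 * support[c]          # weight each class by its support
    return total / len(y_true)

y_true = ["joy", "joy", "sadness", "anger", "joy", "sadness"]
y_pred = ["joy", "sadness", "sadness", "anger", "joy", "sadness"]
print(round(weighted_f1(y_true, y_pred), 4))  # → 0.8333
```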
Joshwabail/lunar_lander_test
Joshwabail
2022-05-09T16:57:52Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T16:29:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -177.16 +/- 72.05 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
ansegura/ppo-LunarLander-v2-test-2
ansegura
2022-05-09T15:44:13Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T15:43:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 267.76 +/- 16.85 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
princeton-nlp/CoFi-MRPC-s60
princeton-nlp
2022-05-09T15:24:25Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-09T15:19:52Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset MRPC. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
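The "60% sparsity" above means that roughly 60% of the pruned model's parameters are zeroed out or removed. A minimal sketch of measuring sparsity, assuming it is counted as the fraction of exactly-zero parameters over plain nested weight lists (the weights below are hypothetical):

```python
def sparsity(weights):
    """Fraction of parameters that are exactly zero across all layers."""
    flat = [w for layer in weights for w in layer]
    return sum(1 for w in flat if w == 0.0) / len(flat)

# Hypothetical weight rows after pruning
layers = [
    [0.0, 0.3, 0.0, -0.1],
    [0.0, 0.0, 0.2, 0.0],
]
print(sparsity(layers))  # → 0.625
```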
princeton-nlp/CoFi-CoLA-s95
princeton-nlp
2022-05-09T15:24:06Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-09T15:20:55Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset CoLA. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
princeton-nlp/CoFi-CoLA-s60
princeton-nlp
2022-05-09T15:23:43Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-09T15:20:20Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset CoLA. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
princeton-nlp/CoFi-RTE-s60
princeton-nlp
2022-05-09T15:23:20Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-09T15:10:20Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset RTE. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
srini98/RLModel1
srini98
2022-05-09T15:04:19Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T15:03:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 97.96 +/- 73.06 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
guhuawuli/distilbert-base-uncased-finetuned-ner
guhuawuli
2022-05-09T15:03:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-09T13:28:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.8982049036777583 - name: Recall type: recall value: 0.9179997762613268 - name: F1 type: f1 value: 0.9079944674965422 - name: Accuracy type: accuracy value: 0.979427137115351 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0729 - Precision: 0.8982 - Recall: 0.9180 - F1: 0.9080 - Accuracy: 0.9794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 220 | 0.1036 | 0.8607 | 0.8797 | 0.8701 | 0.9727 | | No log | 2.0 | 440 | 0.0762 | 0.8912 | 0.9131 | 0.9020 | 0.9783 | | 0.2005 | 3.0 | 660 | 0.0729 | 0.8982 | 0.9180 | 0.9080 | 0.9794 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0a0+3fd9dcf - Datasets 2.1.0 - Tokenizers 0.12.1
ansegura/ppo-LunarLander-v2-test-1
ansegura
2022-05-09T14:54:56Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T14:54:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 266.06 +/- 17.29 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
gigant/LunarLander-v2_PPO
gigant
2022-05-09T13:36:27Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T13:35:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 275.23 +/- 20.86 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
arimboux/ppo-LunarLander-v2
arimboux
2022-05-09T12:44:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-08T12:31:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 258.23 +/- 23.14 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
FollishBoi/ppo-LunarLander-v2_try2
FollishBoi
2022-05-09T12:11:55Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T12:11:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 244.64 +/- 58.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
deepgai/tweet_eval-sentiment-finetuned
deepgai
2022-05-09T10:46:47Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-08T19:20:19Z
--- license: mit tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: tweet_eval-sentiment-finetuned results: - task: name: Sentiment Analysis type: sentiment-analysis dataset: name: tweeteval type: tweeteval args: default metrics: - name: Accuracy type: accuracy value: 0.7099 - name: f1 type: f1 value: 0.7097 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tweet_eval-sentiment-finetuned This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the Tweet_Eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6532 - Accuracy: 0.744 - F1: 0.7437 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 128 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7491 | 1.0 | 357 | 0.6089 | 0.7345 | 0.7314 | | 0.5516 | 2.0 | 714 | 0.5958 | 0.751 | 0.7516 | | 0.4618 | 3.0 | 1071 | 0.6131 | 0.748 | 0.7487 | | 0.4066 | 4.0 | 1428 | 0.6532 | 0.744 | 0.7437 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
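The tweet_eval run above uses `lr_scheduler_type: cosine` with `warmup_ratio: 0.1`. A minimal sketch of that schedule shape — linear warmup over the first 10% of steps, then cosine decay to zero — assuming the common Hugging Face convention and the run's 1428 total steps (4 epochs × 357):

```python
import math

def cosine_with_warmup(step, total_steps=1428, warmup_ratio=0.1, peak_lr=3e-05):
    """Linear warmup to peak_lr, then cosine decay to 0 over remaining steps."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps      # linear ramp up
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

print(cosine_with_warmup(0))     # 0.0 at the very first step
print(cosine_with_warmup(142))   # peak_lr at the end of warmup
print(cosine_with_warmup(1428))  # 0.0 at the end of training
```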
jhoonk/bert-base-uncased-finetuned-swag
jhoonk
2022-05-09T10:41:40Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-05-02T10:57:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 1.0337 - Accuracy: 0.7888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7451 | 1.0 | 4597 | 0.5944 | 0.7696 | | 0.3709 | 2.0 | 9194 | 0.6454 | 0.7803 | | 0.1444 | 3.0 | 13791 | 1.0337 | 0.7888 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Theimisa/distilbert-base-uncased-aisera_texts
Theimisa
2022-05-09T09:49:59Z
7
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-05T12:29:09Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-aisera_texts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-aisera_texts This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.0694 | 1.0 | 7790 | 1.9868 | | 1.9054 | 2.0 | 15580 | 1.8646 | | 1.8701 | 3.0 | 23370 | 1.8283 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Tokenizers 0.12.1
JacopoBandoni/BioBertRelationGenesDiseases
JacopoBandoni
2022-05-09T09:47:10Z
7
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T10:25:29Z
--- license: afl-3.0 widget: - text: "The case of a 72-year-old male with @DISEASE$ with poor insulin control (fasting hyperglycemia greater than 180 mg/dl) who had a long-standing polyuric syndrome is here presented. Hypernatremia and plasma osmolality elevated together with a low urinary osmolality led to the suspicion of diabetes insipidus, which was subsequently confirmed by the dehydration test and the administration of @GENE$ sc." example_title: "Example 1" - text: "Hypernatremia and plasma osmolality elevated together with a low urinary osmolality led to the suspicion of diabetes insipidus, which was subsequently confirmed by the dehydration test and the administration of @GENE$ sc. With 61% increase in the calculated urinary osmolarity one hour post desmopressin s.c., @DISEASE$ was diagnosed." example_title: "Example 2" --- The following is a fine-tuning of the BioBERT model on the GAD dataset. The model works by masking the gene string with "@GENE$" and the disease string with "@DISEASE$". The output is a text classification label that can either be: - "LABEL0" if there is no relation - "LABEL1" if there is a relation.
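The masking step described above can be sketched as follows — the helper name is illustrative, not part of any released code; only the `@GENE$`/`@DISEASE$` placeholder format comes from the card:

```python
def mask_pair(text, gene, disease):
    # Replace the gene and disease mentions with the placeholder
    # tokens the model was trained on.
    return text.replace(gene, "@GENE$").replace(disease, "@DISEASE$")

masked = mask_pair(
    "Mutations in BRCA1 increase the risk of breast cancer.",
    gene="BRCA1",
    disease="breast cancer",
)
print(masked)  # Mutations in @GENE$ increase the risk of @DISEASE$.
```

The masked sentence can then be passed to a `text-classification` pipeline loaded from `JacopoBandoni/BioBertRelationGenesDiseases`, which returns `LABEL0` (no relation) or `LABEL1` (relation) as described above.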
RajSang/pegasus-sports-titles
RajSang
2022-05-09T09:26:14Z
16
1
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer widget: - text: "Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response. First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net. The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener. Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November. Gerrard is not at Villa to learn how to avoid relegation. His demands remain as high as they were as a player and Coutinho's arrival is an example of that. Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game. The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees. Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away. 
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution. However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him." language: en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-sports-titles This model is a Pegasus model fine-tuned on **sports news articles scraped from the internet (for educational purposes only)**. The model can generate titles for sports articles. Try it out using the inference API. ## Model description A Pegasus model tuned on generating scientific titles has been further fine-tuned to generate titles for sports articles. Articles on **Tennis, Football (Soccer), Cricket, Athletics and Rugby** were used for training. I experimented with training the tokenizer from scratch, but it did not give good results compared to the pre-trained tokenizer. ## Usage ```python from transformers import pipeline #Feel free to play around with the generation parameters. 
#Reduce the beam width for faster inference #Note that the maximum length for the generated titles is 64 gen_kwargs = {"length_penalty": 0.6, "num_beams":4, "num_return_sequences": 4,"num_beam_groups":4,"diversity_penalty":2.0} pipe = pipeline("summarization", model="RajSang/pegasus-sports-titles") #Change the article according to your wish article=""" Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response. First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net. The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener. Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November. Gerrard is not at Villa to learn how to avoid relegation. His demands remain as high as they were as a player and Coutinho's arrival is an example of that. Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game. The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees. Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. 
Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away. When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution. However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him. """ result=pipe(article, **gen_kwargs)[0]["summary_text"] print(result) ''' Output Title 1 : Coutinho's arrival sparks Villa comeback Title 2 : Philippe Coutinho marked his debut for Aston Villa with a goal and an assist as Steven Gerrard's side came from two goals down to draw with Manchester United. Title 3 : Steven Gerrard's first game in charge of Aston Villa ended in a dramatic draw against Manchester United - but it was the arrival of Philippe Coutinho that marked the night. Title 4 : Liverpool loanee Philippe Coutinho marked his first appearance for Aston Villa with two goals as Steven Gerrard's side came from two goals down to draw 2-2.''' ``` ## Training procedure While training, **short titles were combined with the subtitles for the articles to improve the quality of the generated titles, and the subtitles were removed from the main body of the articles.** ## Limitations In rare cases, if the opening few lines of a passage/article are descriptive enough, the model often just copies these lines instead of looking for information further down the article, which may not be desirable in some cases. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results **Rouge1:38.2315** **Rouge2: 18.6598** **RougueL: 31.7393** **RougeLsum: 31.7086** ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
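The preprocessing step described under "Training procedure" (combining short titles with subtitles and dropping subtitles from the article body) might look roughly like the following — the helper name, the length threshold, and the joining format are all assumptions, since the exact code is not published:

```python
def prepare_example(title, subtitle, body, short_title_words=6):
    # Combine a short title with the subtitle to form a richer target,
    # then remove the subtitle from the article body.
    if subtitle and len(title.split()) < short_title_words:
        title = f"{title}: {subtitle}"
    body = body.replace(subtitle, "").strip() if subtitle else body
    return title, body

target, source = prepare_example(
    title="Villa fight back",
    subtitle="Coutinho scores on debut as Villa draw with United",
    body=("Coutinho scores on debut as Villa draw with United. "
          "Coutinho proved the catalyst for a memorable response."),
)
print(target)
print(source)
```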
Cmepthbiu/deep_rl
Cmepthbiu
2022-05-09T09:22:41Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T09:09:06Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 203.88 +/- 20.92 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
madatnlp/gamza-bart-for-kormath
madatnlp
2022-05-09T09:17:11Z
5
0
transformers
[ "transformers", "tf", "bart", "text2text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-09T08:19:07Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: madatnlp/gamza-bart-for-kormath results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # madatnlp/gamza-bart-for-kormath This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1418 - Validation Loss: 0.3009 - Epoch: 29 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.4155 | 1.9300 | 0 | | 1.4995 | 1.0293 | 1 | | 1.0445 | 0.8365 | 2 | | 0.8775 | 0.7569 | 3 | | 0.8198 | 0.7778 | 4 | | 0.7619 | 0.7430 | 5 | | 0.7324 | 0.7259 | 6 | | 0.7234 | 0.7214 | 7 | | 0.6697 | 0.6819 | 8 | | 0.6599 | 0.6673 | 9 | | 0.6387 | 0.6433 | 10 | | 0.6227 | 0.6651 | 11 | | 0.6017 | 0.6128 | 12 | | 0.5820 | 0.6430 | 13 | | 0.5229 | 0.5611 | 14 | | 0.4617 | 0.4675 | 15 | | 0.4071 | 0.4463 | 16 | | 0.3495 | 0.4213 | 17 | | 0.3202 | 0.4103 | 18 | | 0.2875 | 0.4477 | 19 | | 0.2528 | 0.3244 | 20 | | 0.2331 | 0.4037 | 21 | | 0.2117 | 0.3041 | 22 | | 0.1943 | 0.3069 | 23 | | 0.1805 | 0.3385 | 24 | | 0.2267 | 0.3347 | 25 | | 0.2049 | 0.2993 | 26 | | 0.1800 | 0.3792 | 27 | | 0.1583 | 0.2905 | 28 | | 0.1418 | 0.3009 | 29 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e8
theojolliffe
2022-05-09T08:48:07Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-09T07:16:32Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: distilbart-cnn-arxiv-pubmed-v3-e8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbart-cnn-arxiv-pubmed-v3-e8 This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8329 - Rouge1: 53.3047 - Rouge2: 34.6219 - Rougel: 37.6148 - Rougelsum: 50.8973 - Gen Len: 141.8704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 398 | 1.1211 | 50.4753 | 30.5417 | 33.192 | 48.1321 | 141.8704 | | 1.3657 | 2.0 | 796 | 0.9944 | 52.2197 | 33.6109 | 35.9448 | 50.0028 | 141.6111 | | 0.887 | 3.0 | 1194 | 0.9149 | 52.796 | 33.7683 | 36.4941 | 50.4514 | 141.5926 | | 0.6548 | 4.0 | 1592 | 0.8725 | 52.5353 | 33.4019 | 36.4573 | 50.2506 | 142.0 | | 0.6548 | 5.0 | 1990 | 0.8540 | 53.2987 | 34.6476 | 38.314 | 51.163 | 141.4815 | | 0.504 | 6.0 | 2388 | 0.8395 | 52.7218 | 34.6524 | 37.9921 | 50.5185 | 141.5556 | | 0.4006 | 7.0 | 2786 | 0.8342 | 53.2251 | 35.2702 | 38.3763 | 51.1958 | 141.6667 | | 0.3314 | 
8.0 | 3184 | 0.8329 | 53.3047 | 34.6219 | 37.6148 | 50.8973 | 141.8704 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
KushalRamaiya/ppo-LunarLander-v2
KushalRamaiya
2022-05-09T07:15:37Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T06:54:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 268.32 +/- 24.24 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
RustBucket/LunarLanderTest
RustBucket
2022-05-09T06:47:33Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T06:47:02Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 238.47 +/- 60.15 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
huggingtweets/jamesliao333
huggingtweets
2022-05-09T05:49:36Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-09T05:47:52Z
--- language: en thumbnail: http://www.huggingtweets.com/jamesliao333/1652075372352/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1522973288288333825/NhsZowLa_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">DON XMCA//素 Vitamin(RNG) 🦀 "MILLENNIUM 定制 Vision"</div> <div style="text-align: center; font-size: 14px;">@jamesliao333</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from DON XMCA//素 Vitamin(RNG) 🦀 "MILLENNIUM 定制 Vision". 
| Data | DON XMCA//素 Vitamin(RNG) 🦀 "MILLENNIUM 定制 Vision" | | --- | --- | | Tweets downloaded | 202 | | Retweets | 37 | | Short tweets | 16 | | Tweets kept | 149 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ed1hlxcu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamesliao333's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mfrtr3lf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mfrtr3lf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jamesliao333') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/propertyexile
huggingtweets
2022-05-09T05:28:39Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-22T20:00:56Z
--- language: en thumbnail: http://www.huggingtweets.com/propertyexile/1652074114021/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1523442545153519616/mYJEJtEL_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Primo</div> <div style="text-align: center; font-size: 14px;">@propertyexile</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Primo. 
| Data | Primo | | --- | --- | | Tweets downloaded | 304 | | Retweets | 37 | | Short tweets | 26 | | Tweets kept | 241 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1q8zni52/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @propertyexile's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1f85w6fy) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1f85w6fy/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/propertyexile') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/computerforever
huggingtweets
2022-05-09T05:19:58Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-09T05:19:20Z
--- language: en thumbnail: http://www.huggingtweets.com/computerforever/1652073594573/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1518444670266839045/38xr9OAd_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">computer sweetie</div> <div style="text-align: center; font-size: 14px;">@computerforever</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from computer sweetie. 
| Data | computer sweetie | | --- | --- | | Tweets downloaded | 2170 | | Retweets | 48 | | Short tweets | 313 | | Tweets kept | 1809 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9j3sj0ot/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @computerforever's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iw1hcff) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iw1hcff/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/computerforever') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
zzyzx0/PPO-LunarLander-v2
zzyzx0
2022-05-09T02:47:36Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-09T02:46:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 208.86 +/- 20.83 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e64
theojolliffe
2022-05-09T02:03:17Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-08T18:50:49Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-cnn-pubmed-arxiv-pubmed-v3-e64 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-pubmed-arxiv-pubmed-v3-e64 This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0630 - Rouge1: 58.7 - Rouge2: 47.8042 - Rougel: 50.6967 - Rougelsum: 57.5543 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 64 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 398 | 0.9499 | 53.8396 | 34.0954 | 35.6734 | 51.3453 | 142.0 | | 1.1219 | 2.0 | 796 | 0.8223 | 53.0414 | 33.3193 | 35.7448 | 50.1675 | 142.0 | | 0.6681 | 3.0 | 1194 | 0.7689 | 53.6684 | 35.3651 | 37.7087 | 51.1441 | 142.0 | | 0.4393 | 4.0 | 1592 | 0.7694 | 53.9066 | 35.3925 | 38.8917 | 51.6172 | 142.0 | | 0.4393 | 5.0 | 1990 | 0.7597 | 54.0746 | 36.1026 | 39.1318 | 51.9272 | 142.0 | | 0.2947 | 6.0 | 2388 | 0.8284 | 53.1168 | 34.7428 | 38.0573 | 50.9563 | 142.0 | | 0.2016 | 7.0 | 2786 | 0.7951 | 55.7222 | 39.0458 | 42.5265 | 53.5359 | 142.0 | | 0.1422 | 8.0 | 3184 | 
0.7793 | 56.2376 | 40.3348 | 43.435 | 54.3228 | 142.0 | | 0.1096 | 9.0 | 3582 | 0.8260 | 55.0372 | 39.0552 | 42.5403 | 53.0694 | 142.0 | | 0.1096 | 10.0 | 3980 | 0.8397 | 53.849 | 37.519 | 40.674 | 52.1357 | 141.7037 | | 0.0881 | 11.0 | 4378 | 0.8504 | 56.4835 | 41.0484 | 44.9407 | 54.3557 | 142.0 | | 0.0693 | 12.0 | 4776 | 0.8285 | 55.7705 | 39.8585 | 43.722 | 53.7607 | 142.0 | | 0.0572 | 13.0 | 5174 | 0.8327 | 57.932 | 43.5378 | 46.8233 | 55.8739 | 142.0 | | 0.0461 | 14.0 | 5572 | 0.8720 | 57.6733 | 42.9742 | 45.8698 | 56.018 | 142.0 | | 0.0461 | 15.0 | 5970 | 0.8723 | 57.6072 | 42.6946 | 45.2551 | 55.8486 | 142.0 | | 0.0416 | 16.0 | 6368 | 0.8764 | 57.1973 | 43.1931 | 46.4492 | 55.3842 | 142.0 | | 0.0343 | 17.0 | 6766 | 0.8638 | 57.4474 | 43.3544 | 46.3026 | 55.7863 | 142.0 | | 0.03 | 18.0 | 7164 | 0.9234 | 57.9166 | 43.8551 | 46.6473 | 56.3895 | 142.0 | | 0.0252 | 19.0 | 7562 | 0.9393 | 58.2908 | 45.2321 | 47.1398 | 56.6618 | 142.0 | | 0.0252 | 20.0 | 7960 | 0.8966 | 59.2798 | 46.381 | 49.3514 | 57.6061 | 142.0 | | 0.024 | 21.0 | 8358 | 0.9056 | 57.8409 | 44.2048 | 47.3329 | 56.2568 | 142.0 | | 0.0195 | 22.0 | 8756 | 0.9424 | 57.551 | 44.6847 | 47.2771 | 56.2391 | 142.0 | | 0.0182 | 23.0 | 9154 | 0.9361 | 59.1078 | 46.4704 | 49.4178 | 57.6796 | 142.0 | | 0.0169 | 24.0 | 9552 | 0.9456 | 56.7966 | 43.3135 | 46.4208 | 55.4646 | 142.0 | | 0.0169 | 25.0 | 9950 | 0.9867 | 59.5561 | 47.4638 | 50.0725 | 58.2388 | 141.8519 | | 0.0147 | 26.0 | 10348 | 0.9727 | 58.2574 | 44.9904 | 47.2701 | 56.4274 | 142.0 | | 0.0125 | 27.0 | 10746 | 0.9589 | 58.6792 | 45.8465 | 48.0781 | 57.0755 | 142.0 | | 0.0117 | 28.0 | 11144 | 0.9635 | 59.1118 | 46.6614 | 50.0552 | 57.6153 | 142.0 | | 0.0103 | 29.0 | 11542 | 0.9623 | 58.2517 | 45.6401 | 48.5888 | 56.7733 | 142.0 | | 0.0103 | 30.0 | 11940 | 0.9752 | 59.0707 | 47.203 | 49.7992 | 57.6216 | 142.0 | | 0.0096 | 31.0 | 12338 | 0.9610 | 57.6781 | 44.0504 | 47.6718 | 56.1201 | 142.0 | | 0.0089 | 32.0 | 12736 | 0.9705 | 58.5592 | 45.7397 | 
48.681 | 57.0302 | 142.0 | | 0.008 | 33.0 | 13134 | 0.9989 | 58.1997 | 45.6345 | 48.2551 | 56.8571 | 141.7778 | | 0.0075 | 34.0 | 13532 | 0.9880 | 57.9632 | 44.7845 | 47.8763 | 56.3979 | 142.0 | | 0.0075 | 35.0 | 13930 | 1.0041 | 58.1316 | 46.2737 | 49.5986 | 56.8263 | 142.0 | | 0.0061 | 36.0 | 14328 | 0.9923 | 58.4686 | 46.1735 | 49.1299 | 57.0331 | 142.0 | | 0.0066 | 37.0 | 14726 | 1.0157 | 58.4277 | 45.6559 | 49.1739 | 56.8198 | 141.6481 | | 0.0052 | 38.0 | 15124 | 1.0220 | 58.5166 | 46.3883 | 50.0964 | 57.0104 | 142.0 | | 0.0049 | 39.0 | 15522 | 0.9949 | 59.3697 | 47.0609 | 50.2733 | 58.1388 | 142.0 | | 0.0049 | 40.0 | 15920 | 1.0368 | 59.9537 | 48.4059 | 51.8185 | 58.8002 | 142.0 | | 0.0039 | 41.0 | 16318 | 1.0228 | 58.2093 | 46.4807 | 49.54 | 56.9994 | 142.0 | | 0.0041 | 42.0 | 16716 | 1.0218 | 57.6376 | 45.4951 | 49.003 | 56.4606 | 142.0 | | 0.0035 | 43.0 | 17114 | 1.0381 | 57.2845 | 43.9593 | 46.779 | 55.6106 | 142.0 | | 0.0059 | 44.0 | 17512 | 1.0316 | 58.5506 | 46.2111 | 49.4844 | 56.9506 | 142.0 | | 0.0059 | 45.0 | 17910 | 1.0388 | 58.8383 | 47.6053 | 50.6187 | 57.7125 | 142.0 | | 0.0028 | 46.0 | 18308 | 1.0068 | 59.3198 | 47.6888 | 50.2478 | 58.0 | 142.0 | | 0.0028 | 47.0 | 18706 | 1.0446 | 58.8938 | 46.7524 | 49.5642 | 57.3659 | 142.0 | | 0.0022 | 48.0 | 19104 | 1.0347 | 59.8253 | 48.3871 | 51.3949 | 58.5652 | 142.0 | | 0.0024 | 49.0 | 19502 | 1.0294 | 60.655 | 50.2339 | 53.1662 | 59.3333 | 142.0 | | 0.0024 | 50.0 | 19900 | 1.0225 | 58.5131 | 47.3009 | 50.1642 | 57.2287 | 142.0 | | 0.0022 | 51.0 | 20298 | 1.0320 | 59.6101 | 47.4104 | 50.5291 | 58.075 | 142.0 | | 0.0018 | 52.0 | 20696 | 1.0507 | 58.7957 | 46.8893 | 50.2996 | 57.3662 | 142.0 | | 0.0015 | 53.0 | 21094 | 1.0599 | 58.9064 | 47.9433 | 51.3082 | 57.6871 | 142.0 | | 0.0015 | 54.0 | 21492 | 1.0636 | 59.6607 | 48.5737 | 51.2361 | 58.333 | 142.0 | | 0.0013 | 55.0 | 21890 | 1.0452 | 58.7026 | 46.5286 | 49.9672 | 57.2521 | 142.0 | | 0.0012 | 56.0 | 22288 | 1.0418 | 58.9452 | 47.7209 | 50.657 | 
57.7103 | 142.0 | | 0.0011 | 57.0 | 22686 | 1.0578 | 58.485 | 46.0691 | 49.811 | 57.2591 | 142.0 | | 0.0009 | 58.0 | 23084 | 1.0561 | 59.2268 | 48.1987 | 50.1948 | 57.8871 | 142.0 | | 0.0009 | 59.0 | 23482 | 1.0548 | 59.6307 | 48.1778 | 50.9934 | 58.2098 | 142.0 | | 0.0009 | 60.0 | 23880 | 1.0498 | 59.5054 | 48.8866 | 51.5977 | 58.1868 | 142.0 | | 0.0008 | 61.0 | 24278 | 1.0583 | 60.0232 | 49.2518 | 52.2297 | 58.6774 | 142.0 | | 0.0007 | 62.0 | 24676 | 1.0659 | 59.1755 | 48.4144 | 51.5157 | 58.0416 | 142.0 | | 0.0007 | 63.0 | 25074 | 1.0622 | 59.1023 | 47.74 | 50.5188 | 57.9707 | 142.0 | | 0.0007 | 64.0 | 25472 | 1.0630 | 58.7 | 47.8042 | 50.6967 | 57.5543 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
ebonazza2910/model
ebonazza2910
2022-05-08T23:12:15Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-03T16:38:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2220 - Wer: 0.1301 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.9743 | 0.18 | 400 | 2.1457 | 1.0000 | | 0.5747 | 0.36 | 800 | 0.3415 | 0.3456 | | 0.3383 | 0.54 | 1200 | 0.2797 | 0.3095 | | 0.2967 | 0.72 | 1600 | 0.2464 | 0.2568 | | 0.2747 | 0.9 | 2000 | 0.2341 | 0.2466 | | 0.2501 | 1.08 | 2400 | 0.2299 | 0.2317 | | 0.2309 | 1.26 | 2800 | 0.2306 | 0.2328 | | 0.2273 | 1.44 | 3200 | 0.2212 | 0.2375 | | 0.225 | 1.62 | 3600 | 0.2193 | 0.2267 | | 0.2204 | 1.8 | 4000 | 0.2157 | 0.2295 | | 0.2256 | 1.98 | 4400 | 0.2165 | 0.2260 | | 0.1941 | 2.17 | 4800 | 0.2105 | 0.2163 | | 0.1925 | 2.35 | 5200 | 0.2098 | 0.2153 | | 0.1925 | 2.53 | 5600 | 0.2120 | 0.2148 | | 0.1952 | 2.71 | 6000 | 0.2063 | 0.2178 | | 0.1971 | 2.89 | 6400 | 0.2100 | 0.2158 | | 0.1888 | 3.07 | 6800 | 0.2131 | 0.2172 
| | 0.1702 | 3.25 | 7200 | 0.2155 | 0.2203 | | 0.173 | 3.43 | 7600 | 0.2141 | 0.2254 | | 0.174 | 3.61 | 8000 | 0.2017 | 0.2100 | | 0.1802 | 3.79 | 8400 | 0.1998 | 0.2043 | | 0.1717 | 3.97 | 8800 | 0.2070 | 0.2110 | | 0.162 | 4.15 | 9200 | 0.2082 | 0.2157 | | 0.154 | 4.33 | 9600 | 0.2163 | 0.2161 | | 0.1598 | 4.51 | 10000 | 0.2070 | 0.2171 | | 0.1576 | 4.69 | 10400 | 0.2034 | 0.2116 | | 0.1601 | 4.87 | 10800 | 0.1990 | 0.2009 | | 0.152 | 5.05 | 11200 | 0.1994 | 0.2039 | | 0.1395 | 5.23 | 11600 | 0.2013 | 0.2046 | | 0.1407 | 5.41 | 12000 | 0.2009 | 0.2022 | | 0.1449 | 5.59 | 12400 | 0.1982 | 0.1961 | | 0.1483 | 5.77 | 12800 | 0.2082 | 0.2054 | | 0.1514 | 5.95 | 13200 | 0.1953 | 0.1985 | | 0.138 | 6.13 | 13600 | 0.2046 | 0.1965 | | 0.1322 | 6.31 | 14000 | 0.2076 | 0.1948 | | 0.1372 | 6.5 | 14400 | 0.1968 | 0.1944 | | 0.136 | 6.68 | 14800 | 0.1971 | 0.1963 | | 0.1382 | 6.86 | 15200 | 0.2001 | 0.1990 | | 0.1335 | 7.04 | 15600 | 0.2026 | 0.1935 | | 0.1206 | 7.22 | 16000 | 0.1986 | 0.1938 | | 0.1239 | 7.4 | 16400 | 0.2054 | 0.1919 | | 0.1254 | 7.58 | 16800 | 0.1918 | 0.1939 | | 0.1262 | 7.76 | 17200 | 0.1960 | 0.1947 | | 0.126 | 7.94 | 17600 | 0.1932 | 0.1906 | | 0.1169 | 8.12 | 18000 | 0.2037 | 0.1916 | | 0.1142 | 8.3 | 18400 | 0.1999 | 0.1900 | | 0.1151 | 8.48 | 18800 | 0.1920 | 0.1855 | | 0.1121 | 8.66 | 19200 | 0.2007 | 0.1859 | | 0.1135 | 8.84 | 19600 | 0.1932 | 0.1879 | | 0.1158 | 9.02 | 20000 | 0.1916 | 0.1859 | | 0.105 | 9.2 | 20400 | 0.1961 | 0.1831 | | 0.1023 | 9.38 | 20800 | 0.1914 | 0.1791 | | 0.1004 | 9.56 | 21200 | 0.1881 | 0.1787 | | 0.1023 | 9.74 | 21600 | 0.1963 | 0.1817 | | 0.1075 | 9.92 | 22000 | 0.1889 | 0.1861 | | 0.103 | 10.1 | 22400 | 0.1975 | 0.1791 | | 0.0952 | 10.28 | 22800 | 0.1979 | 0.1787 | | 0.0957 | 10.46 | 23200 | 0.1922 | 0.1817 | | 0.0966 | 10.65 | 23600 | 0.1953 | 0.1857 | | 0.0997 | 10.83 | 24000 | 0.1902 | 0.1783 | | 0.0981 | 11.01 | 24400 | 0.1959 | 0.1780 | | 0.0868 | 11.19 | 24800 | 0.2056 | 0.1783 | | 0.0905 | 11.37 | 25200 | 
0.1958 | 0.1777 | | 0.0892 | 11.55 | 25600 | 0.1935 | 0.1796 | | 0.0891 | 11.73 | 26000 | 0.1968 | 0.1763 | | 0.0888 | 11.91 | 26400 | 0.2043 | 0.1804 | | 0.0842 | 12.09 | 26800 | 0.2043 | 0.1733 | | 0.0828 | 12.27 | 27200 | 0.1964 | 0.1715 | | 0.0827 | 12.45 | 27600 | 0.1991 | 0.1749 | | 0.0844 | 12.63 | 28000 | 0.2014 | 0.1695 | | 0.0837 | 12.81 | 28400 | 0.1973 | 0.1759 | | 0.0872 | 12.99 | 28800 | 0.1975 | 0.1689 | | 0.0778 | 13.17 | 29200 | 0.1979 | 0.1740 | | 0.0759 | 13.35 | 29600 | 0.2093 | 0.1753 | | 0.076 | 13.53 | 30000 | 0.1990 | 0.1731 | | 0.0762 | 13.71 | 30400 | 0.2024 | 0.1690 | | 0.0764 | 13.89 | 30800 | 0.2037 | 0.1709 | | 0.0756 | 14.07 | 31200 | 0.2007 | 0.1716 | | 0.0702 | 14.25 | 31600 | 0.2011 | 0.1680 | | 0.0694 | 14.43 | 32000 | 0.2061 | 0.1683 | | 0.0713 | 14.61 | 32400 | 0.2014 | 0.1687 | | 0.0693 | 14.79 | 32800 | 0.1961 | 0.1658 | | 0.071 | 14.98 | 33200 | 0.1921 | 0.1645 | | 0.0659 | 15.16 | 33600 | 0.2079 | 0.1682 | | 0.0659 | 15.34 | 34000 | 0.2046 | 0.1649 | | 0.0685 | 15.52 | 34400 | 0.1994 | 0.1660 | | 0.0663 | 15.7 | 34800 | 0.1970 | 0.1652 | | 0.0678 | 15.88 | 35200 | 0.1961 | 0.1634 | | 0.0644 | 16.06 | 35600 | 0.2141 | 0.1644 | | 0.0596 | 16.24 | 36000 | 0.2098 | 0.1628 | | 0.0629 | 16.42 | 36400 | 0.1969 | 0.1616 | | 0.0598 | 16.6 | 36800 | 0.2026 | 0.1604 | | 0.0628 | 16.78 | 37200 | 0.2050 | 0.1620 | | 0.0616 | 16.96 | 37600 | 0.1958 | 0.1618 | | 0.0538 | 17.14 | 38000 | 0.2093 | 0.1588 | | 0.0573 | 17.32 | 38400 | 0.1995 | 0.1588 | | 0.0555 | 17.5 | 38800 | 0.2077 | 0.1608 | | 0.0555 | 17.68 | 39200 | 0.2036 | 0.1571 | | 0.0578 | 17.86 | 39600 | 0.2045 | 0.1572 | | 0.056 | 18.04 | 40000 | 0.2065 | 0.1593 | | 0.0525 | 18.22 | 40400 | 0.2093 | 0.1580 | | 0.0527 | 18.4 | 40800 | 0.2141 | 0.1585 | | 0.0529 | 18.58 | 41200 | 0.2137 | 0.1585 | | 0.0533 | 18.76 | 41600 | 0.2021 | 0.1558 | | 0.0529 | 18.94 | 42000 | 0.2108 | 0.1535 | | 0.05 | 19.12 | 42400 | 0.2114 | 0.1555 | | 0.0479 | 19.31 | 42800 | 0.2091 | 0.1549 | | 0.0509 | 
19.49 | 43200 | 0.2145 | 0.1554 | | 0.0486 | 19.67 | 43600 | 0.2061 | 0.1536 | | 0.049 | 19.85 | 44000 | 0.2132 | 0.1548 | | 0.0484 | 20.03 | 44400 | 0.2077 | 0.1523 | | 0.0449 | 20.21 | 44800 | 0.2177 | 0.1529 | | 0.0452 | 20.39 | 45200 | 0.2204 | 0.1517 | | 0.0477 | 20.57 | 45600 | 0.2132 | 0.1517 | | 0.048 | 20.75 | 46000 | 0.2119 | 0.1532 | | 0.0469 | 20.93 | 46400 | 0.2109 | 0.1524 | | 0.0439 | 21.11 | 46800 | 0.2118 | 0.1503 | | 0.044 | 21.29 | 47200 | 0.2033 | 0.1474 | | 0.0435 | 21.47 | 47600 | 0.2066 | 0.1485 | | 0.0418 | 21.65 | 48000 | 0.2125 | 0.1491 | | 0.0417 | 21.83 | 48400 | 0.2139 | 0.1487 | | 0.0446 | 22.01 | 48800 | 0.2054 | 0.1493 | | 0.039 | 22.19 | 49200 | 0.2179 | 0.1459 | | 0.0414 | 22.37 | 49600 | 0.2118 | 0.1466 | | 0.0394 | 22.55 | 50000 | 0.2104 | 0.1444 | | 0.0381 | 22.73 | 50400 | 0.2095 | 0.1458 | | 0.0382 | 22.91 | 50800 | 0.2193 | 0.1471 | | 0.0391 | 23.09 | 51200 | 0.2143 | 0.1455 | | 0.0365 | 23.27 | 51600 | 0.2198 | 0.1445 | | 0.0368 | 23.46 | 52000 | 0.2151 | 0.1444 | | 0.038 | 23.64 | 52400 | 0.2094 | 0.1439 | | 0.038 | 23.82 | 52800 | 0.2137 | 0.1422 | | 0.0374 | 24.0 | 53200 | 0.2180 | 0.1425 | | 0.0352 | 24.18 | 53600 | 0.2207 | 0.1422 | | 0.0343 | 24.36 | 54000 | 0.2269 | 0.1445 | | 0.0353 | 24.54 | 54400 | 0.2222 | 0.1438 | | 0.0348 | 24.72 | 54800 | 0.2224 | 0.1413 | | 0.0342 | 24.9 | 55200 | 0.2146 | 0.1401 | | 0.0337 | 25.08 | 55600 | 0.2246 | 0.1408 | | 0.0327 | 25.26 | 56000 | 0.2161 | 0.1401 | | 0.0339 | 25.44 | 56400 | 0.2212 | 0.1402 | | 0.0324 | 25.62 | 56800 | 0.2203 | 0.1394 | | 0.0319 | 25.8 | 57200 | 0.2145 | 0.1376 | | 0.0317 | 25.98 | 57600 | 0.2147 | 0.1375 | | 0.0302 | 26.16 | 58000 | 0.2213 | 0.1362 | | 0.0309 | 26.34 | 58400 | 0.2218 | 0.1365 | | 0.0308 | 26.52 | 58800 | 0.2167 | 0.1362 | | 0.0294 | 26.7 | 59200 | 0.2169 | 0.1368 | | 0.0297 | 26.88 | 59600 | 0.2163 | 0.1350 | | 0.0289 | 27.06 | 60000 | 0.2188 | 0.1348 | | 0.0284 | 27.24 | 60400 | 0.2172 | 0.1338 | | 0.0278 | 27.42 | 60800 | 0.2230 | 
0.1342 | | 0.0283 | 27.6 | 61200 | 0.2233 | 0.1342 | | 0.0292 | 27.79 | 61600 | 0.2238 | 0.1335 | | 0.0286 | 27.97 | 62000 | 0.2218 | 0.1327 | | 0.0262 | 28.15 | 62400 | 0.2220 | 0.1324 | | 0.0274 | 28.33 | 62800 | 0.2182 | 0.1323 | | 0.0279 | 28.51 | 63200 | 0.2170 | 0.1314 | | 0.0269 | 28.69 | 63600 | 0.2228 | 0.1313 | | 0.0264 | 28.87 | 64000 | 0.2209 | 0.1313 | | 0.0254 | 29.05 | 64400 | 0.2224 | 0.1304 | | 0.026 | 29.23 | 64800 | 0.2220 | 0.1302 | | 0.0253 | 29.41 | 65200 | 0.2229 | 0.1304 | | 0.0244 | 29.59 | 65600 | 0.2217 | 0.1298 | | 0.025 | 29.77 | 66000 | 0.2223 | 0.1303 | | 0.0255 | 29.95 | 66400 | 0.2220 | 0.1301 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
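The Wer column in the table above is the word error rate: the word-level Levenshtein distance between the model's transcript and the reference, divided by the number of reference words. A minimal pure-Python sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, substitution)
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution (sat -> sit) and one deletion (the) against 6 reference words
print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2/6 ≈ 0.333
```

Evaluation libraries such as `jiwer` add text normalization on top of this core edit-distance computation.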
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e12
theojolliffe
2022-05-08T23:01:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-08T20:57:25Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-cnn-pubmed-arxiv-pubmed-v3-e12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-pubmed-arxiv-pubmed-v3-e12 This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8658 - Rouge1: 57.2678 - Rouge2: 43.347 - Rougel: 47.0854 - Rougelsum: 55.4167 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.2548 | 1.0 | 795 | 0.9154 | 53.4249 | 34.0377 | 36.4396 | 50.9884 | 141.8889 | | 0.6994 | 2.0 | 1590 | 0.8213 | 54.7613 | 35.9428 | 38.3899 | 51.9527 | 142.0 | | 0.5272 | 3.0 | 2385 | 0.7703 | 53.8561 | 35.4871 | 38.0502 | 51.131 | 141.8889 | | 0.3407 | 4.0 | 3180 | 0.7764 | 53.9514 | 35.8553 | 39.1935 | 51.7005 | 142.0 | | 0.2612 | 5.0 | 3975 | 0.7529 | 54.4056 | 36.2605 | 40.8003 | 52.0424 | 142.0 | | 0.1702 | 6.0 | 4770 | 0.8105 | 54.2251 | 37.1441 | 41.2472 | 52.2803 | 142.0 | | 0.1276 | 7.0 | 5565 | 0.8004 | 56.49 | 40.4009 | 44.018 | 54.2404 | 141.5556 | | 0.0978 | 8.0 | 6360 
| 0.7890 | 56.6339 | 40.9867 | 43.9603 | 54.4468 | 142.0 | | 0.0711 | 9.0 | 7155 | 0.8285 | 56.0469 | 40.7758 | 44.1395 | 53.9668 | 142.0 | | 0.0649 | 10.0 | 7950 | 0.8498 | 56.9873 | 42.4721 | 46.705 | 55.2188 | 142.0 | | 0.0471 | 11.0 | 8745 | 0.8547 | 57.7898 | 43.4238 | 46.5868 | 56.0858 | 142.0 | | 0.0336 | 12.0 | 9540 | 0.8658 | 57.2678 | 43.347 | 47.0854 | 55.4167 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
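The Rouge1 scores reported above are ROUGE-1 F1 values (unigram overlap between the generated and reference summaries, expressed as a percentage). A simplified sketch of that computation, ignoring the stemming and tokenization details of the real `rouge_score` package:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall,
    with per-word counts clipped to the reference."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum((ref_counts & cand_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the model improves summarization quality",
                  "the model improves quality")
print(round(100 * score, 2))  # precision 4/4, recall 4/5 -> 88.89
```

ROUGE-2 and ROUGE-L follow the same precision/recall/F1 pattern over bigrams and longest common subsequences respectively.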
subhasisj/Ar-Mulitlingula-MiniLM
subhasisj
2022-05-08T21:26:17Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-08T19:19:57Z
# Ar-Mulitlingual-MiniLM This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
subhasisj/Zh-Mulitlingual-MiniLM
subhasisj
2022-05-08T21:19:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-08T19:56:18Z
--- license: mit tags: - generated_from_trainer model-index: - name: Zh-Mulitlingual-MiniLM results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Zh-Mulitlingual-MiniLM This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
jecp97/trial-ppo-LunarLander-v2
jecp97
2022-05-08T20:28:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-08T16:22:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 206.72 +/- 58.57 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
leebaidyanathan/TEST2ppo-LunarLander-v2
leebaidyanathan
2022-05-08T20:28:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-08T20:28:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 223.51 +/- 38.67 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
syrios/lunarlanding
syrios
2022-05-08T20:18:08Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-08T16:47:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 260.47 +/- 35.08 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
sam999/t5-end2end-questions-generation
sam999
2022-05-08T20:01:47Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-08T01:16:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-end2end-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end2end-questions-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0297 | 0.07 | 100 | 1.6940 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
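The card above pairs `train_batch_size: 4` with `gradient_accumulation_steps: 16` to reach `total_train_batch_size: 64`: gradients from 16 micro-batches are accumulated before a single optimizer step, which is numerically equivalent to one step on the full batch. A minimal numeric sketch of that equivalence, using the mean-squared-error gradient for a single scalar weight (no optimizer update, just the gradient comparison):

```python
def grad(w, batch):
    """d/dw of mean((w*x - y)^2) over a batch of (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(float(x), 2.0 * x) for x in range(1, 65)]        # 64 examples, true w = 2
micro_batches = [data[i:i + 4] for i in range(0, 64, 4)]  # 16 micro-batches of 4

w = 0.0
accumulated = 0.0
for mb in micro_batches:
    # scale each micro-batch gradient so the accumulated sum equals the full-batch mean
    accumulated += grad(w, mb) / len(micro_batches)
full_batch = grad(w, data)
print(abs(accumulated - full_batch) < 1e-9)  # True: the two gradients match
```

This is why gradient accumulation lets a small-memory GPU mimic a large effective batch size at the cost of more forward/backward passes per optimizer step.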
huxxx657/roberta-base-finetuned-squad
huxxx657
2022-05-08T19:57:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-08T02:59:11Z
--- license: mit tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: roberta-base-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.8152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8557 | 1.0 | 8239 | 0.8152 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
SofyPreo/ppo-LunarLander-v2
SofyPreo
2022-05-08T18:45:10Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-08T17:42:02Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 259.34 +/- 20.02 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 ---
LiYuan/Amazon-Cross-Encoder-Classification
LiYuan
2022-05-08T17:52:56Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-08T04:29:31Z
--- license: afl-3.0 --- There are two types of Cross-Encoder models. One is the Cross-Encoder Regression model that we fine-tuned and described in the previous section; the other is the Cross-Encoder Classification model. Both are introduced in the same paper: https://doi.org/10.48550/arxiv.1908.10084 Both models address the issue that the BERT model is too time- and resource-consuming to train on pairwise sentence inputs. Their weights are initialized from the pretrained BERT and RoBERTa networks, so we only need to fine-tune them, spending much less time to yield comparable or even better sentence embeddings. The figure below shows the architecture of the Cross-Encoder Classification model. ![](1.png) We then evaluated the model on the 2,000 held-out test examples and obtained a test accuracy of **46.05%**, almost identical to the best validation accuracy, suggesting the model generalizes well.
Kabutopusu/DialoGPT-medium-NITWMae
Kabutopusu
2022-05-08T17:39:42Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-08T16:18:15Z
--- tags: - conversational --- # DialoGPT Model, Trained on dialogue from "Mae" in the game Night in the Woods ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("Kabutopusu/DialoGPT-medium-NITWMae") model = AutoModelWithLMHead.from_pretrained("Kabutopusu/DialoGPT-medium-NITWMae") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.1, temperature=1.2 ) # pretty print last output tokens from bot print("Mae: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
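The `generate` call in the card above combines `top_k=100`, `top_p=0.1` and sampling. A minimal pure-Python sketch of how top-k and top-p (nucleus) filtering restrict the candidate set before a token is sampled (the token probabilities below are made up for illustration):

```python
def filter_top_k_top_p(probs, top_k, top_p):
    """Keep the top-k tokens, then the smallest prefix of those whose
    cumulative probability reaches top_p; renormalize the survivors."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

probs = {"yeah": 0.5, "nope": 0.3, "maybe": 0.15, "dunno": 0.05}
print(filter_top_k_top_p(probs, top_k=3, top_p=0.7))
```

With a low `top_p` like 0.1 the nucleus often collapses to the single most likely token, which keeps the bot's replies conservative; the `temperature` parameter rescales the logits before this filtering is applied.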
PrajwalS/wav2vec2_custom_model_50
PrajwalS
2022-05-08T16:33:22Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-06T09:39:43Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2_custom_model_50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_custom_model_50 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 1.18.4 - Tokenizers 0.11.6
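The `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 1000` in the card above corresponds to the common warmup-then-linear-decay schedule: the learning-rate multiplier rises linearly from 0 to 1 over the warmup steps, then falls linearly back to 0 at the end of training. A sketch of that multiplier (mirroring the shape of `transformers`' `get_linear_schedule_with_warmup`; the step counts below are illustrative):

```python
def linear_schedule(step, warmup_steps, total_steps):
    """Learning-rate multiplier: linear warmup to 1.0, then linear decay to 0.0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# e.g. warmup over 1000 steps within a 10000-step run
print(linear_schedule(500, 1000, 10000))    # 0.5  (mid-warmup)
print(linear_schedule(1000, 1000, 10000))   # 1.0  (warmup complete)
print(linear_schedule(10000, 1000, 10000))  # 0.0  (end of training)
```

The actual learning rate at each step is the base rate (here `0.0001`) times this multiplier.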
GideonFr/PPO-LunarLander-v2-low-gamma
GideonFr
2022-05-08T16:00:31Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-08T15:59:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -34.75 +/- 121.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
huggingtweets/temapex
huggingtweets
2022-05-08T15:33:31Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-14T15:47:57Z
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1511150115582525442/9l-weW8Z_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ema Pex 🌠 ペクスえま</div>
<div style="text-align: center; font-size: 14px;">@temapex</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Ema Pex 🌠 ペクスえま.

| Data | Ema Pex 🌠 ペクスえま |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 446 |
| Short tweets | 259 |
| Tweets kept | 2540 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qyw32m2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @temapex's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3my4azzd) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3my4azzd/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/temapex')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
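The training-data table in the card above follows a simple accounting: retweets and short tweets are dropped from the downloaded set, and the rest are kept for fine-tuning (3245 - 446 - 259 = 2540). A dependency-free sketch of that partitioning (the `RT @` prefix check and the word-count threshold are assumptions standing in for huggingtweets' actual filtering rules):

```python
def summarize_corpus(tweets, min_words=3):
    """Partition raw tweets the way the card's data table does:
    retweets and short tweets are dropped, the remainder is kept.
    The 'RT @' prefix and min_words threshold are assumed heuristics."""
    stats = {"downloaded": len(tweets), "retweets": 0, "short": 0, "kept": []}
    for tweet in tweets:
        if tweet.startswith("RT @"):
            stats["retweets"] += 1          # drop retweets
        elif len(tweet.split()) < min_words:
            stats["short"] += 1             # drop short tweets
        else:
            stats["kept"].append(tweet)     # keep for fine-tuning
    return stats
```

By construction, `downloaded == retweets + short + len(kept)`, which is exactly the invariant the table's four rows satisfy.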