Dataset schema (column name, type, and min/max, reconstructed from the dataset-viewer header):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (by length) | 5 | 139 |
| author | string (by length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-29 12:28:39 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (526 distinct values) | — | — |
| tags | list (by length) | 1 | 4.05k |
| pipeline_tag | string (55 distinct values) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-29 12:28:30 |
| card | string (by length) | 11 | 1.01M |
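The records below follow this schema, one field per line per model. A minimal sketch of how such a dump could be queried once exported to a tabular file — the Parquet filename here is hypothetical, and pandas is just one reasonable choice:

```python
import pandas as pd

# Hypothetical export path; this dump does not name its source file.
df = pd.read_parquet("model_cards.parquet")

# Timestamps are microsecond-resolution UTC, per the schema above.
df["last_modified"] = pd.to_datetime(df["last_modified"], utc=True)

# Example query: the ten most-downloaded text-generation models.
top = (
    df[df["pipeline_tag"] == "text-generation"]
    .sort_values("downloads", ascending=False)
    .head(10)[["modelId", "author", "downloads", "likes"]]
)
print(top)
```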
SreyanG-NVIDIA/gpt2-wikitext2
SreyanG-NVIDIA
2022-05-16T11:44:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-16T11:23:42Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5573 | 1.0 | 2249 | 6.4633 | | 6.1893 | 2.0 | 4498 | 6.1993 | | 6.0153 | 3.0 | 6747 | 6.1085 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
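The hyperparameter block in auto-generated cards like the one above maps directly onto `transformers.TrainingArguments`. A minimal sketch of the equivalent configuration, assuming the standard Trainer API (the output directory is a placeholder, not from the card):

```python
from transformers import TrainingArguments

# Mirrors the card's reported hyperparameters; Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are the TrainingArguments defaults, so they need no flags.
args = TrainingArguments(
    output_dir="out",                  # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,     # card reports train_batch_size: 8
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```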
ml6team/mbart-large-cc25-cnn-dailymail-nl
ml6team
2022-05-16T11:41:37Z
16
6
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "bart", "summarization", "nl", "dataset:ml6team/cnn_dailymail_nl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - nl tags: - mbart - bart - summarization datasets: - ml6team/cnn_dailymail_nl pipeline_tag: summarization widget: - text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.' - text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.' --- # mbart-large-cc25-cnn-dailymail-nl ## Model description Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97) ## Intended uses & limitations It's meant for summarizing Dutch news articles. #### How to use ```python import transformers undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained( "ml6team/mbart-large-cc25-cnn-dailymail-nl" ) tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25") summarization_pipeline = transformers.pipeline( task="summarization", model=undisputed_best_model, tokenizer=tokenizer, ) summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[ "nl_XX" ] article = "Kan je dit even samenvatten alsjeblief." # Dutch summarization_pipeline( article, do_sample=True, top_p=0.75, top_k=50, # num_beams=4, min_length=50, early_stopping=True, truncation=True, )[0]["summary_text"] ``` ## Training data Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
anes-saidi/aragpt2-base-finetuned-wikitext2
anes-saidi
2022-05-16T11:14:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-16T10:51:28Z
--- tags: - generated_from_trainer model-index: - name: aragpt2-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aragpt2-base-finetuned-wikitext2 This model is a fine-tuned version of [aubmindlab/aragpt2-base](https://huggingface.co/aubmindlab/aragpt2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.0307 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 387 | 5.1841 | | 5.9664 | 2.0 | 774 | 5.0627 | | 5.4166 | 3.0 | 1161 | 5.0307 | ### Framework versions - Transformers 4.11.0 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.10.3
SreyanG-NVIDIA/distilgpt2-finetuned-wikitext2
SreyanG-NVIDIA
2022-05-16T11:06:40Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-16T10:15:16Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7592 | 1.0 | 2334 | 3.6646 | | 3.6519 | 2.0 | 4668 | 3.6454 | | 3.601 | 3.0 | 7002 | 3.6408 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
jsunster/layoutlmv2-finetuned-cord
jsunster
2022-05-16T09:35:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-16T08:58:07Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-cord results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-cord This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.19.1 - Pytorch 1.10.0+cu111 - Datasets 2.2.1 - Tokenizers 0.12.1
SreyanG-NVIDIA/bert-base-cased-finetuned-squad
SreyanG-NVIDIA
2022-05-16T08:39:41Z
35
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-13T13:39:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.0848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0337 | 1.0 | 5546 | 1.0150 | | 0.7546 | 2.0 | 11092 | 1.0015 | | 0.5537 | 3.0 | 16638 | 1.0848 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
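The card above leaves usage unspecified; a minimal inference sketch with the standard question-answering pipeline (the question and context strings are illustrative, not from the card):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="SreyanG-NVIDIA/bert-base-cased-finetuned-squad",
)

# Illustrative SQuAD-style inputs.
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```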
madatnlp/sk-kogptv2-kormath-causal
madatnlp
2022-05-16T07:56:43Z
8
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-05-13T11:28:16Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_keras_callback model-index: - name: madatnlp/sk-kogptv2-kormath-causal results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # madatnlp/sk-kogptv2-kormath-causal This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3184 - Validation Loss: 1.4046 - Epoch: 15 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 2.2999999e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.7142 | 1.8683 | 0 | | 1.6077 | 1.4417 | 1 | | 1.2458 | 1.3161 | 2 | | 1.0396 | 1.2704 | 3 | | 0.8848 | 1.2818 | 4 | | 0.7634 | 1.2579 | 5 | | 0.6699 | 1.2724 | 6 | | 0.5948 | 1.2718 | 7 | | 0.5306 | 1.3300 | 8 | | 0.4832 | 1.3377 | 9 | | 0.4401 | 1.3038 | 10 | | 0.4053 | 1.3622 | 11 | | 0.3782 | 1.3577 | 12 | | 0.3550 | 1.3696 | 13 | | 0.3347 | 1.3682 | 14 | | 0.3184 | 1.4046 | 15 | ### Framework versions - Transformers 4.19.1 - TensorFlow 2.8.0 - Datasets 2.2.1 - Tokenizers 0.12.1
ceggian/sbert_pt_reddit_softmax_256
ceggian
2022-05-16T06:52:11Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-05-16T06:48:33Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 117759 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 11775, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
kompactss/JeBERT_ko_je
kompactss
2022-05-16T06:11:24Z
5
0
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-01T15:16:08Z
--- license: afl-3.0 --- # 🍊 Jeju Dialect Translation Model 🍊 - Standard Korean -> Jeju dialect - Made by Team 3, Cohort 3 of the Goorm NLP course!! - github link : https://github.com/Goormnlpteam3/JeBERT ## 1. Seq2Seq Transformer Model - encoder : BertConfig - decoder : BertConfig - Tokenizer : WordPiece Tokenizer ## 2. Dataset - Jit Dataset - AI HUB (+ arae-a characters) ## 3. Hyper Parameters - Epoch : 10 epochs (best at epoch 7) - Random Seed : 42 - Learning Rate : 5e-5 - Warm up Ratio : 0.1 - Batch Size : 32 ## 4. BLEU Score - Jit + AI HUB (+ arae-a characters) Dataset : 67.3 --- ### CREDIT - 주형준 : wngudwns2798@gmail.com - 강가람 : 1st9aram@gmail.com - 고광연 : rhfprl11@gmail.com - 김수연 : s01090445778@gmail.com - 이원경 : hjtwin2@gmail.com - 조성은 : eun102476@gmail.com
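The card documents the architecture but not inference. A heavily hedged sketch, assuming the repository ships tokenizer files and a generation-ready config (neither is confirmed by the card, so treat this as a starting point only):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# The repo tags mark this as an encoder-decoder checkpoint; loading it this way
# is an assumption based on those tags, not documented usage.
tokenizer = AutoTokenizer.from_pretrained("kompactss/JeBERT_ko_je")
model = EncoderDecoderModel.from_pretrained("kompactss/JeBERT_ko_je")

inputs = tokenizer("안녕하세요", return_tensors="pt")  # "hello" in standard Korean
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```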
kompactss/JeBERT_ko_je_v2
kompactss
2022-05-16T06:10:50Z
5
0
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-02T17:30:31Z
--- license: afl-3.0 --- # 🍊 Jeju Dialect Translation Model 🍊 - Standard Korean -> Jeju dialect - Made by Team 3, Cohort 3 of the Goorm NLP course!! - github link : https://github.com/Goormnlpteam3/JeBERT ## 1. Seq2Seq Transformer Model - encoder : BertConfig - decoder : BertConfig - Tokenizer : WordPiece Tokenizer ## 2. Dataset - Jit Dataset - AI HUB (+ arae-a characters)_v2 ## 3. Hyper Parameters - Epoch : 10 epochs (best at epoch 7) - Random Seed : 42 - Learning Rate : 5e-5 - Warm up Ratio : 0.1 - Batch Size : 32 ## 4. BLEU Score - Jit + AI HUB (+ arae-a characters) Dataset : 67.6 --- ### CREDIT - 주형준 : wngudwns2798@gmail.com - 강가람 : 1st9aram@gmail.com - 고광연 : rhfprl11@gmail.com - 김수연 : s01090445778@gmail.com - 이원경 : hjtwin2@gmail.com - 조성은 : eun102476@gmail.com
minsik-oh/TEST2ppo-LunarLander-v2
minsik-oh
2022-05-16T06:09:03Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-16T06:08:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 259.04 +/- 16.81 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
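The usage section above is still the template's TODO; a minimal sketch of loading and evaluating such a checkpoint with `huggingface_sb3`, where the archive filename is a guess based on common naming and should be checked against the repo's file list:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; verify it in the repository before use.
checkpoint = load_from_hub(
    repo_id="minsik-oh/TEST2ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```

The same pattern applies to the other stable-baselines3 records below; only the repo id, environment id, and algorithm class change.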
fancyerii/bert-finetuned-ner
fancyerii
2022-05-16T05:35:53Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-16T05:00:21Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9387755102040817 - name: Recall type: recall value: 0.9522046449007069 - name: F1 type: f1 value: 0.9454423928481912 - name: Accuracy type: accuracy value: 0.9869606169423677 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0592 - Precision: 0.9388 - Recall: 0.9522 - F1: 0.9454 - Accuracy: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0857 | 1.0 | 1756 | 0.0635 | 0.9121 | 0.9359 | 0.9238 | 0.9830 | | 0.0318 | 2.0 | 3512 | 0.0586 | 0.9245 | 0.9465 | 0.9354 | 0.9857 | | 0.0222 | 3.0 | 5268 | 0.0592 | 0.9388 | 0.9522 | 0.9454 | 0.9870 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.6
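A minimal inference sketch for the NER model above, using the token-classification pipeline (the input sentence is illustrative):

```python
from transformers import pipeline

# "simple" aggregation merges word pieces back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="fancyerii/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```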
IMSyPP/hate_speech_targets_nl
IMSyPP
2022-05-16T04:49:35Z
9
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "nl", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-16T04:23:10Z
--- language: - nl license: mit --- # Hate Speech Target Classifier for Social Media Content in Dutch A monolingual model for hate speech target classification of social media content in Dutch. The model was trained on 20000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2000 posts. It is based on the pre-trained language model [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased). ## Tokenizer During training the text was preprocessed using the DistilBERT tokenizer. We suggest using the same tokenizer for inference. ## Model output The model classifies each input into one of the following 12 classes: * 0 - HOMOPHOBIA * 1 - OTHER * 2 - RELIGION * 3 - ANTISEMITISM * 4 - IDEOLOGY * 5 - MIGRANTS * 6 - POLITICS * 7 - RACISM * 8 - MEDIA * 9 - ISLAMOPHOBIA * 10 - INDIVIDUAL * 11 - SEXISM
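A minimal inference sketch for the target classifier above (the Dutch input is illustrative; the model returns one of the 12 labels listed in the card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="IMSyPP/hate_speech_targets_nl")

# Illustrative Dutch input: "This is an example sentence."
print(classifier("Dit is een voorbeeldzin."))
```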
Tititun/consumer_super
Tititun
2022-05-16T04:46:12Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-15T15:31:47Z
--- license: mit tags: - generated_from_trainer model-index: - name: consumer_super results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # consumer_super This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu102 - Datasets 2.2.1 - Tokenizers 0.12.1
nttoanh/t5vi-finetuned-en-to-vi
nttoanh
2022-05-15T22:20:38Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:mt_eng_vietnamese", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-15T17:03:36Z
--- tags: - generated_from_trainer datasets: - mt_eng_vietnamese metrics: - bleu model-index: - name: t5vi-finetuned-en-to-vi results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: mt_eng_vietnamese type: mt_eng_vietnamese args: iwslt2015-en-vi metrics: - name: Bleu type: bleu value: 13.547 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5vi-finetuned-en-to-vi This model is a fine-tuned version of [imthanhlv/t5vi](https://huggingface.co/imthanhlv/t5vi) on the mt_eng_vietnamese dataset. It achieves the following results on the evaluation set: - Loss: 1.3827 - Bleu: 13.547 - Gen Len: 17.3719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 20 - eval_batch_size: 20 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.8026 | 1.0 | 6666 | 1.5907 | 10.9756 | 17.3231 | | 1.6217 | 2.0 | 13332 | 1.4635 | 12.375 | 17.3444 | | 1.5087 | 3.0 | 19998 | 1.4131 | 13.1828 | 17.3924 | | 1.4446 | 4.0 | 26664 | 1.3915 | 13.5217 | 17.3617 | | 1.4076 | 5.0 | 33330 | 1.3827 | 13.547 | 17.3719 | ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
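A minimal inference sketch for the English-to-Vietnamese model above. Whether this checkpoint expects a T5-style task prefix is not stated in the card, so the bare input here is an assumption:

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="nttoanh/t5vi-finetuned-en-to-vi")

# Bare English input; add a task prefix if the checkpoint turns out to require one.
print(translator("I love natural language processing.")[0]["generated_text"])
```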
nadirbekovnadir/LunarLander-64_128_tanh
nadirbekovnadir
2022-05-15T22:16:03Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T22:15:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 279.69 +/- 14.54 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
vukpetar/ppo-CarRacing-v0-v2
vukpetar
2022-05-15T21:43:18Z
3
0
stable-baselines3
[ "stable-baselines3", "CarRacing-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T21:41:44Z
--- library_name: stable-baselines3 tags: - CarRacing-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 849.04 +/- 31.91 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CarRacing-v0 type: CarRacing-v0 --- # **PPO** Agent playing **CarRacing-v0** This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
mateotyz/tf-xml-r-base-ape-swm
mateotyz
2022-05-15T21:19:18Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-15T18:47:41Z
--- tags: - generated_from_keras_callback model-index: - name: mateotyz/tf-xml-r-base-ape-swm results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mateotyz/tf-xml-r-base-ape-swm This model is a fine-tuned version of [jplu/tf-xlm-roberta-base](https://huggingface.co/jplu/tf-xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1811 - Validation Loss: 1.0441 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.3563 | 1.0668 | 0 | | 1.1682 | 1.0687 | 1 | | 1.1811 | 1.0441 | 2 | ### Framework versions - Transformers 4.19.1 - TensorFlow 2.8.0 - Datasets 2.2.1 - Tokenizers 0.12.1
maglagla/TEST2ppo-LunarLander-v2
maglagla
2022-05-15T19:23:03Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T19:01:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 269.84 +/- 18.09 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
KhariotnovKK/luna_lender_v1
KhariotnovKK
2022-05-15T18:37:37Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-06T08:33:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 260.20 +/- 20.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
send-it/TEST5ppo-LunarLander-v2
send-it
2022-05-15T18:30:57Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T18:30:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 270.57 +/- 10.85 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
prashanth/mbart-large-cc25-ge-en-to-hi
prashanth
2022-05-15T17:11:05Z
19
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "dataset:hindi_english_machine_translation", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-14T23:04:55Z
--- tags: - generated_from_trainer datasets: - hindi_english_machine_translation metrics: - bleu model-index: - name: mbart-large-cc25-ge-en-to-hi results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: hindi_english_machine_translation type: hindi_english_machine_translation args: hi-en metrics: - name: Bleu type: bleu value: 4.5974 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-cc25-ge-en-to-hi This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset. It achieves the following results on the evaluation set: - Loss: 1.3397 - Bleu: 4.5974 - Gen Len: 66.244 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:------:|:-------:| | 1.4602 | 1.0 | 135739 | 1.3397 | 4.5974 | 66.244 | ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu102 - Datasets 1.18.0 - Tokenizers 0.12.1
SebastianS/bert-finetuned-squad
SebastianS
2022-05-15T16:19:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-15T14:39:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
huggingtweets/dclblogger-loopifyyy
huggingtweets
2022-05-15T15:32:50Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-15T15:28:31Z
--- language: en thumbnail: http://www.huggingtweets.com/dclblogger-loopifyyy/1652628765621/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1472740175130230784/L7Xcs7wJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480550067564163078/D90SnyUa_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Matty & Loopify 🧙‍♂️</div> <div style="text-align: center; font-size: 14px;">@dclblogger-loopifyyy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Matty & Loopify 🧙‍♂️. | Data | Matty | Loopify 🧙‍♂️ | | --- | --- | --- | | Tweets downloaded | 3250 | 3250 | | Retweets | 62 | 117 | | Short tweets | 494 | 867 | | Tweets kept | 2694 | 2266 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pq5pxck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dclblogger-loopifyyy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dclblogger-loopifyyy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
kktoto/kt_punc
kktoto
2022-05-15T15:16:29Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:chn_senti_corp", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-15T13:47:21Z
--- tags: - generated_from_trainer datasets: - chn_senti_corp metrics: - precision - recall - f1 - accuracy model-index: - name: kt_punc results: - task: name: Token Classification type: token-classification dataset: name: chn_senti_corp type: chn_senti_corp args: default metrics: - name: Precision type: precision value: 0.7078651685393258 - name: Recall type: recall value: 0.7313662547821116 - name: F1 type: f1 value: 0.7194238380517767 - name: Accuracy type: accuracy value: 0.957316742326961 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kt_punc This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the chn_senti_corp dataset. It achieves the following results on the evaluation set: - Loss: 0.1703 - Precision: 0.7079 - Recall: 0.7314 - F1: 0.7194 - Accuracy: 0.9573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1661 | 1.0 | 600 | 0.1351 | 0.6566 | 0.6833 | 0.6697 | 0.9498 | | 0.1246 | 2.0 | 1200 | 0.1330 | 0.6854 | 0.6665 | 0.6758 | 0.9521 | | 0.1121 | 3.0 | 1800 | 0.1303 | 0.6885 | 0.6994 | 0.6939 | 0.9537 | | 0.1008 | 4.0 | 2400 | 0.1359 | 0.6836 | 0.7248 | 0.7036 | 0.9543 | | 0.0809 | 5.0 | 3000 | 0.1404 | 0.7035 | 0.7082 | 0.7059 | 0.9559 | | 0.0696 | 6.0 | 3600 | 0.1449 | 0.6986 | 0.7224 | 0.7103 | 0.9560 | | 0.0628 | 7.0 | 4200 | 0.1563 | 0.7063 | 0.7214 | 0.7138 | 0.9567 | | 0.0561 | 8.0 | 4800 | 0.1618 | 0.7024 | 0.7333 | 0.7175 | 0.9568 | | 0.0525 | 9.0 | 5400 | 0.1669 | 0.7083 | 0.7335 | 0.7207 | 0.9574 | | 0.0453 | 10.0 | 6000 | 0.1703 | 0.7079 | 0.7314 | 0.7194 | 0.9573 | ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
umbertospazio/1500000_PPO-LunarLander-v2
umbertospazio
2022-05-15T15:03:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T15:02:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 283.46 +/- 17.55 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean
Zohar
2022-05-15T14:12:08Z
11
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-15T11:47:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-negative-restaurant-reviews-clean results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-negative-restaurant-reviews-clean This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6841 | 1.0 | 3105 | 3.5793 | | 3.6184 | 2.0 | 6210 | 3.5313 | | 3.5943 | 3.0 | 9315 | 3.5187 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.11.0
KrusHan/PPO-LunarLander-v2
KrusHan
2022-05-15T13:18:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T13:18:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 260.52 +/- 27.65 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
robert1003/LunarLander-v2-ppo
robert1003
2022-05-15T13:15:46Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T05:03:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 280.07 +/- 14.87 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
WhatIsThisSignupForm/ppo-LunarLander-v2
WhatIsThisSignupForm
2022-05-15T12:50:36Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T12:46:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 174.04 +/- 57.75 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
ipvikas/rare-puppers
ipvikas
2022-05-15T12:47:13Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-05-01T16:51:17Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9552238583564758 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### corgi ![corgi](images/corgi.jpg) #### samoyed ![samoyed](images/samoyed.jpg) #### shiba inu ![shiba inu](images/shiba_inu.jpg)
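A minimal inference sketch for the HuggingPics classifier above; the image URL is a placeholder, and any local path or PIL image also works:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ipvikas/rare-puppers")

# Placeholder URL; returns scores over the card's classes (corgi, samoyed, shiba inu).
print(classifier("https://example.com/dog.jpg"))
```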
huggingtweets/medvedevrussia
huggingtweets
2022-05-15T12:26:28Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-15T12:26:21Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2348558617/x0vh6bui3sq97vt4jd2n_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Дмитрий Медведев</div> <div style="text-align: center; font-size: 14px;">@medvedevrussia</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Дмитрий Медведев. | Data | Дмитрий Медведев | | --- | --- | | Tweets downloaded | 1740 | | Retweets | 300 | | Short tweets | 48 | | Tweets kept | 1392 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s7c3vz9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medvedevrussia's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1e00s9pz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1e00s9pz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/medvedevrussia') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
NikiTricky/ffhq-autoencoder-16dim
NikiTricky
2022-05-15T12:01:27Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-05-15T11:29:08Z
--- license: apache-2.0 --- # FFHQ Autoencoder An autoencoder trained on the **F**lickr-**F**aces-**HQ** Dataset with 16 latent dimensions for 1000 epochs. **Note:** The training images were 128x128. It was meant for the [Latent Space Explorer](https://github.com/NikiTricky2/Latent-space-vizualizer)
FollishBoi/dqn-MountainCar-v0-try1
FollishBoi
2022-05-15T12:00:10Z
4
0
stable-baselines3
[ "stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T11:59:45Z
--- library_name: stable-baselines3 tags: - MountainCar-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: -102.50 +/- 5.73 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: MountainCar-v0 type: MountainCar-v0 --- # **DQN** Agent playing **MountainCar-v0** This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
anas-awadalla/splinter-base-finetuned-squad
anas-awadalla
2022-05-15T11:49:58Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-15T10:55:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: splinter-base-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # splinter-base-finetuned-squad This model is a fine-tuned version of [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
harikp20/hkp24
harikp20
2022-05-15T11:34:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-15T08:30:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: hkp24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hkp24 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2249 | 1.0 | 5533 | 1.1675 | | 0.961 | 2.0 | 11066 | 1.1376 | | 0.7581 | 3.0 | 16599 | 1.1619 | ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
anas-awadalla/splinter-large-finetuned-squad
anas-awadalla
2022-05-15T10:51:43Z
27
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-15T08:20:49Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-finetuned-squad

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
FumaNet/TEST1PPO-MountainCar-v0
FumaNet
2022-05-15T10:48:34Z
0
0
stable-baselines3
[ "stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T10:47:58Z
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: -200.00 +/- 0.00
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: MountainCar-v0
      type: MountainCar-v0
---

# **PPO** Agent playing **MountainCar-v0**

This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
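Until then, a minimal loading and evaluation sketch (the checkpoint filename is an assumption; check this repository's files):

```python
# Minimal sketch assuming a standard SB3 Hub layout and filename.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="FumaNet/TEST1PPO-MountainCar-v0",
                           filename="ppo-MountainCar-v0.zip")
model = PPO.load(checkpoint)

env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```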
FumaNet/TEST1PPO-CartPole-v1
FumaNet
2022-05-15T10:24:11Z
4
0
stable-baselines3
[ "stable-baselines3", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T10:23:40Z
---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 397.00 +/- 103.22
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **PPO** Agent playing **CartPole-v1**

This is a trained model of a **PPO** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
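Until then, a minimal loading and evaluation sketch (the checkpoint filename is an assumption; check this repository's files):

```python
# Minimal sketch assuming a standard SB3 Hub layout and filename.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="FumaNet/TEST1PPO-CartPole-v1",
                           filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)

env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```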
esh/MountainCar-v0
esh
2022-05-15T09:23:41Z
0
0
stable-baselines3
[ "stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T09:10:58Z
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: -169.90 +/- 36.95
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: MountainCar-v0
      type: MountainCar-v0
---

# **PPO** Agent playing **MountainCar-v0**

This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
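Until then, a minimal loading and evaluation sketch (the checkpoint filename is an assumption; check this repository's files):

```python
# Minimal sketch assuming a standard SB3 Hub layout and filename.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="esh/MountainCar-v0",
                           filename="ppo-MountainCar-v0.zip")
model = PPO.load(checkpoint)

env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```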
pujaburman30/autotrain-hi_ner_xlmr-869827677
pujaburman30
2022-05-15T09:00:47Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain", "unk", "dataset:pujaburman30/autotrain-data-hi_ner_xlmr", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-15T08:56:46Z
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pujaburman30/autotrain-data-hi_ner_xlmr
co2_eq_emissions: 4.365496441173981
---

# Model Trained Using AutoTrain

- Problem type: Entity Extraction
- Model ID: 869827677
- CO2 Emissions (in grams): 4.365496441173981

## Validation Metrics

- Loss: 0.894961416721344
- Accuracy: 0.7411180773249739
- Precision: 0.590625
- Recall: 0.5080645161290323
- F1: 0.546242774566474

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pujaburman30/autotrain-hi_ner_xlmr-869827677
```

Or Python API:

```
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained("pujaburman30/autotrain-hi_ner_xlmr-869827677", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pujaburman30/autotrain-hi_ner_xlmr-869827677", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
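The snippet above stops at the raw model outputs. One possible continuation that maps them to entity labels (assuming `config.id2label` is populated, as is typical for AutoTrain token-classification models):

```python
# Hypothetical continuation of the snippet above: decode per-token predictions.
import torch

predictions = torch.argmax(outputs.logits, dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[int(label_id)])
```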
anas-awadalla/roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-4
anas-awadalla
2022-05-15T07:49:59Z
0
0
null
[ "generated_from_trainer", "dataset:squad", "license:mit", "region:us" ]
null
2022-05-15T05:12:28Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2
anas-awadalla
2022-05-15T07:40:11Z
0
0
null
[ "generated_from_trainer", "dataset:squad", "license:mit", "region:us" ]
null
2022-05-15T05:02:33Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
meln1k/ppo-CarRacing-v0-v1
meln1k
2022-05-15T07:33:43Z
3
0
stable-baselines3
[ "stable-baselines3", "CarRacing-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T07:32:54Z
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 800.67 +/- 46.00
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CarRacing-v0
      type: CarRacing-v0
---

# **PPO** Agent playing **CarRacing-v0**

This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
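Until then, a minimal loading sketch (the checkpoint filename is an assumption; CarRacing training typically relies on image preprocessing and frame-stacking wrappers, so evaluation should recreate whatever wrapped environment the author used, which is not shown here):

```python
# Minimal loading sketch assuming a standard SB3 Hub layout and filename.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="meln1k/ppo-CarRacing-v0-v1",
                           filename="ppo-CarRacing-v0.zip")
model = PPO.load(checkpoint)
```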
meln1k/ppo-CarRacing-v0
meln1k
2022-05-15T07:31:25Z
11
2
stable-baselines3
[ "stable-baselines3", "CarRacing-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-15T07:19:11Z
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 840.32 +/- 21.17
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CarRacing-v0
      type: CarRacing-v0
---

# **PPO** Agent playing **CarRacing-v0**

This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
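A minimal loading sketch in the meantime (filename assumed; as with any pixel-based CarRacing agent, reproduce the author's observation wrappers before evaluating):

```python
# Minimal loading sketch assuming a standard SB3 Hub layout and filename.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="meln1k/ppo-CarRacing-v0",
                           filename="ppo-CarRacing-v0.zip")
model = PPO.load(checkpoint)
```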
anas-awadalla/roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-0
anas-awadalla
2022-05-15T07:30:26Z
0
0
null
[ "generated_from_trainer", "dataset:squad", "license:mit", "region:us" ]
null
2022-05-15T04:52:31Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-4
anas-awadalla
2022-05-15T07:20:51Z
0
0
null
[ "generated_from_trainer", "dataset:squad", "license:mit", "region:us" ]
null
2022-05-15T04:47:24Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
anas-awadalla
2022-05-15T07:06:17Z
0
0
null
[ "generated_from_trainer", "dataset:squad", "license:mit", "region:us" ]
null
2022-05-15T04:38:07Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
questgen/all-mpnet-base-v2-feature-extraction-pipeline
questgen
2022-05-15T06:29:59Z
8
2
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2022-05-15T06:25:37Z
---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---

# all-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 384 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for all possible sentence pairs in the batch. We then apply the cross-entropy loss by comparing with the true pairs (a toy sketch of this objective appears below, after the data table).

#### Hyper parameters

We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|---------|:-----:|:-------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
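As promised above, a toy sketch of the in-batch contrastive objective used for fine-tuning. This is illustrative only: it is not the repository's `train_script.py`, and the `scale` temperature is an assumed value (20 is a common default for this kind of loss):

```python
# Toy sketch of the in-batch contrastive objective: cross-entropy over scaled
# cosine similarities, where the true pair for anchor i is candidate i.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors: torch.Tensor, positives: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    anchors = F.normalize(anchors, p=2, dim=1)
    positives = F.normalize(positives, p=2, dim=1)
    scores = anchors @ positives.T * scale  # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Example with random embeddings standing in for encoder outputs:
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```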
ahmeddbahaa/mbart-large-50-finetuned-persian
ahmeddbahaa
2022-05-15T04:01:56Z
18
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "summarization", "persian", "MBart50", "Abstractive Summarization", "generated_from_trainer", "dataset:xlsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-05-14T13:40:15Z
---
tags:
- summarization
- persian
- MBart50
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mbart-large-50-finetuned-persian
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mbart-large-50-finetuned-persian

This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1932
- Rouge-1: 26.11
- Rouge-2: 8.11
- Rouge-l: 21.09
- Gen Len: 37.29
- Bertscore: 71.08

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 5.5612        | 1.0   | 1476 | 4.5015          | 17.07   | 3.14    | 13.54   | 47.49   | 66.83     |
| 4.3049        | 2.0   | 2952 | 4.1055          | 22.63   | 5.89    | 18.03   | 40.43   | 69.23     |
| 3.8154        | 3.0   | 4428 | 3.9822          | 24.57   | 7.15    | 19.74   | 37.35   | 70.36     |
| 3.3401        | 4.0   | 5904 | 4.0088          | 25.84   | 7.96    | 20.95   | 37.56   | 70.83     |
| 2.8879        | 5.0   | 7380 | 4.1932          | 26.24   | 8.26    | 21.23   | 37.78   | 71.05     |

### Framework versions

- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
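A possible inference sketch for this checkpoint (illustrative; not part of the original card):

```python
# Summarize a Persian article with the fine-tuned checkpoint; the text is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mbart-large-50-finetuned-persian")
article = "..."  # a Persian news article goes here
print(summarizer(article, max_length=64, truncation=True)[0]["summary_text"])
```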
bkh6722/bach-arb
bkh6722
2022-05-15T02:34:26Z
30
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-07T21:50:59Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bach-arb

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9404
- Wer: 0.6130

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 115
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 27.8653       | 7.14   | 100  | 3.1369          | 1.0    |
| 2.5975        | 14.28  | 200  | 2.1223          | 0.9976 |
| 1.2001        | 21.41  | 300  | 1.7455          | 0.8774 |
| 0.5938        | 28.55  | 400  | 1.8534          | 0.7981 |
| 0.4001        | 35.69  | 500  | 2.3318          | 0.7740 |
| 0.2895        | 42.83  | 600  | 2.2214          | 0.7163 |
| 0.1853        | 49.97  | 700  | 2.4841          | 0.7043 |
| 0.1318        | 57.14  | 800  | 2.9749          | 0.7139 |
| 0.1067        | 64.28  | 900  | 2.4759          | 0.7115 |
| 0.0635        | 71.41  | 1000 | 2.6708          | 0.6635 |
| 0.0515        | 78.55  | 1100 | 3.0593          | 0.6923 |
| 0.0455        | 85.69  | 1200 | 2.9637          | 0.6587 |
| 0.0329        | 92.83  | 1300 | 2.9837          | 0.6346 |
| 0.0232        | 99.97  | 1400 | 2.9361          | 0.6178 |
| 0.021         | 107.14 | 1500 | 2.9221          | 0.6010 |
| 0.0193        | 114.28 | 1600 | 2.9404          | 0.6130 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
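A possible transcription sketch for this checkpoint (illustrative; assumes a local audio file and that ffmpeg is available for decoding):

```python
# Transcribe an audio file with the fine-tuned checkpoint; the path is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bkh6722/bach-arb")
print(asr("path/to/audio.wav")["text"])
```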
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-4
anas-awadalla
2022-05-15T00:58:56Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-15T00:45:41Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-512-finetuned-squad-seed-4

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-4
anas-awadalla
2022-05-14T23:53:22Z
6
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T23:32:56Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-1024-finetuned-squad-seed-4

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-2
anas-awadalla
2022-05-14T23:31:40Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T23:11:05Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-1024-finetuned-squad-seed-2

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-2
anas-awadalla
2022-05-14T23:31:40Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T23:11:15Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-1024-finetuned-squad-seed-2

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
Metformin/T5model_medFineTune
Metformin
2022-05-14T23:15:48Z
5
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-14T10:11:14Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Metformin/T5model_medFineTune
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Metformin/T5model_medFineTune

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.0442
- Validation Loss: 6.1005
- Epoch: 9

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 7820, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 42.6321    | 28.0647         | 0     |
| 31.2672    | 21.0068         | 1     |
| 24.8310    | 16.6186         | 2     |
| 20.5368    | 13.8025         | 3     |
| 17.3796    | 11.7180         | 4     |
| 15.0329    | 10.0404         | 5     |
| 13.0886    | 8.6286          | 6     |
| 11.5235    | 7.5594          | 7     |
| 10.1123    | 6.8079          | 8     |
| 9.0442     | 6.1005          | 9     |

### Framework versions

- Transformers 4.18.0
- TensorFlow 2.6.3
- Datasets 2.0.0
- Tokenizers 0.12.1
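The serialized optimizer above (AdamWeightDecay with a 100-step warm-up into a linear polynomial decay over 7820 steps) matches what `transformers.create_optimizer` produces; a sketch, not the author's original script:

```python
# Rebuild the reported AdamWeightDecay + warm-up/decay schedule.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=1e-5,
    num_train_steps=7820,
    num_warmup_steps=100,
    weight_decay_rate=0.01,
)
```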
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-0
anas-awadalla
2022-05-14T23:09:42Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T22:49:18Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-1024-finetuned-squad-seed-0

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
prashanth/mbart-large-cc25-ind_finetun-en-to-hi
prashanth
2022-05-14T22:51:49Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "dataset:hindi_english_machine_translation", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-14T22:06:44Z
---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
metrics:
- bleu
model-index:
- name: mbart-large-cc25-ind_finetun-en-to-hi
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: hindi_english_machine_translation
      type: hindi_english_machine_translation
      args: hi-en
    metrics:
    - name: Bleu
      type: bleu
      value: 7.8242
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mbart-large-cc25-ind_finetun-en-to-hi

This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8148
- Bleu: 7.8242
- Gen Len: 75.28

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.3247        | 1.0   | 620  | 1.8148          | 7.8242 | 75.28   |

### Framework versions

- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-2
anas-awadalla
2022-05-14T22:32:48Z
3
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T22:19:32Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-512-finetuned-squad-seed-2

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-0
anas-awadalla
2022-05-14T22:17:30Z
8
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T22:04:23Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-512-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-512-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-4
anas-awadalla
2022-05-14T22:02:44Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:52:47Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-256-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-2
anas-awadalla
2022-05-14T21:51:44Z
8
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:41:56Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-256-finetuned-squad-seed-2

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-256-finetuned-squad-seed-0
anas-awadalla
2022-05-14T21:40:52Z
3
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:30:12Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-256-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-256-finetuned-squad-seed-0

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-0
anas-awadalla
2022-05-14T21:40:29Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:30:14Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-256-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-4
anas-awadalla
2022-05-14T21:28:38Z
3
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:16:06Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-128-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-128-finetuned-squad-seed-4

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-128-finetuned-squad-seed-4
anas-awadalla
2022-05-14T21:28:38Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:10:06Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-128-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-128-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-2
anas-awadalla
2022-05-14T21:14:58Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T21:02:01Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-128-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-128-finetuned-squad-seed-2

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-0
anas-awadalla
2022-05-14T21:00:55Z
5
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:51:17Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-128-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-128-finetuned-squad-seed-0

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-128-finetuned-squad-seed-0
anas-awadalla
2022-05-14T20:58:03Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:48:33Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-128-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-128-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
Leizhang/distilbert-base-uncased-finetuned-emotion
Leizhang
2022-05-14T20:55:21Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-14T16:53:40Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Framework versions

- Transformers 4.19.1
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.12.1
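A possible usage sketch (illustrative; not part of the auto-generated card):

```python
# Classify the emotion of a sentence with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Leizhang/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```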
anas-awadalla/splinter-large-few-shot-k-64-finetuned-squad-seed-4
anas-awadalla
2022-05-14T20:49:53Z
6
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:40:23Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-64-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-64-finetuned-squad-seed-4

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-4
anas-awadalla
2022-05-14T20:46:57Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:35:59Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-64-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-64-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-2
anas-awadalla
2022-05-14T20:35:00Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:25:43Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-64-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-64-finetuned-squad-seed-2

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-64-finetuned-squad-seed-0
anas-awadalla
2022-05-14T20:28:59Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:19:39Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-64-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-64-finetuned-squad-seed-0

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-0
anas-awadalla
2022-05-14T20:24:34Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:15:19Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-64-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-64-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-32-finetuned-squad-seed-4
anas-awadalla
2022-05-14T20:18:03Z
5
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T20:08:23Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-32-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-32-finetuned-squad-seed-4

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/splinter-large-few-shot-k-32-finetuned-squad-seed-0
anas-awadalla
2022-05-14T19:56:59Z
4
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T19:47:36Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-32-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# splinter-large-few-shot-k-32-finetuned-squad-seed-0

This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
anas-awadalla/roberta-large-few-shot-k-16-finetuned-squad-seed-4
anas-awadalla
2022-05-14T19:42:04Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T19:33:24Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-16-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-16-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
Xiaoman/NER-CoNLL2003-V4
Xiaoman
2022-05-14T19:37:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-14T18:52:51Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NER-CoNLL2003-V4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# NER-CoNLL2003-V4

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 14   | 0.3630          |
| No log        | 2.0   | 28   | 0.2711          |
| No log        | 3.0   | 42   | 0.2407          |
| No log        | 4.0   | 56   | 0.2057          |
| No log        | 5.0   | 70   | 0.2095          |

### Framework versions

- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
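With usage undocumented, a minimal tagging sketch follows. The card does not list the label set, so the entity names you get back depend on how the classification head was configured during fine-tuning.

```python
from transformers import pipeline

# Hedged sketch; "simple" aggregation merges subword pieces into word spans.
ner = pipeline(
    "token-classification",
    model="Xiaoman/NER-CoNLL2003-V4",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```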
anas-awadalla/roberta-large-few-shot-k-16-finetuned-squad-seed-0
anas-awadalla
2022-05-14T19:22:38Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-14T19:13:38Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-16-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-16-finetuned-squad-seed-0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
kangaroo927/en_pipeline
kangaroo927
2022-05-14T18:04:29Z
0
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2022-05-14T04:29:58Z
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_pipeline
  results: []
---

| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.4,<3.2.0` |
| **Default Pipeline** | `transformer`, `textcat` |
| **Components** | `transformer`, `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |

### Label Scheme

<details>
<summary>View label scheme (22 labels for 1 component)</summary>

| Component | Labels |
| --- | --- |
| **`textcat`** | `Acute Bleed/Mesenteric Ischemia`, `Adrenal Mass Abdomen/Pelvis`, `Aortic Aneurysm Post EVT`, `Aortic Aneurysm Pre EVT`, `Aortic Dissection`, `Cystogram`, `Dual Phase Abdomen/Pelvis`, `Enterography IBD`, `NON Contrast Abdomen/Pelvis`, `Oral & IV Abdomen Pelvis`, `Oral Contrast Abdomen/Pelvis`, `Pancreas Mass Abdomen/Pelvis`, `Pelvis Only`, `Rectal Contrast Abdomen/Pelvis`, `Renal Donor`, `Renal Mass Abdomen/Pelvis`, `Renal Stone Abdomen/Pelvis`, `Routine Abdomen/Pelvis`, `Trauma Abdomen/Pelvis`, `Urogram Post Treatment/Follow Up`, `Urogram Pre Treatment Initial`, `Venogram` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `CATS_SCORE` | 76.67 |
| `CATS_MICRO_P` | 85.89 |
| `CATS_MICRO_R` | 85.19 |
| `CATS_MICRO_F` | 85.54 |
| `CATS_MACRO_P` | 74.35 |
| `CATS_MACRO_R` | 80.69 |
| `CATS_MACRO_F` | 76.67 |
| `CATS_MACRO_AUC` | 97.57 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TRANSFORMER_LOSS` | 19.80 |
| `TEXTCAT_LOSS` | 504.30 |
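Usage is not documented in the card. A plausible sketch, assuming the repo was published with `spacy-huggingface-hub` and therefore ships an installable wheel (the wheel filename below is a guess; check the repo's file list):

```python
# First install the packaged pipeline, e.g.:
#   pip install https://huggingface.co/kangaroo927/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("CT abdomen/pelvis with IV contrast, rule out renal stone.")

# The textcat component scores every protocol label; print the top three.
print(sorted(doc.cats.items(), key=lambda kv: kv[1], reverse=True)[:3])
```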
huggingtweets/spacex
huggingtweets
2022-05-14T18:02:18Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-14T17:51:44Z
---
language: en
thumbnail: http://www.huggingtweets.com/spacex/1652551333667/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1082744382585856001/rH_k3PtQ_400x400.jpg')">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SpaceX</div>
<div style="text-align: center; font-size: 14px;">@spacex</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from SpaceX.

| Data | SpaceX |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 539 |
| Short tweets | 157 |
| Tweets kept | 2554 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/562aigw4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spacex's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b58vg41) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b58vg41/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/spacex')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
memorysaver/ppo-LunarLander-v2-2
memorysaver
2022-05-14T17:43:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T17:43:05Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 272.57 +/- 8.99
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
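Pending the author's own code, a minimal loading sketch under assumptions: the `huggingface_sb3` helper is the usual route for SB3 checkpoints on the Hub, and the zip filename below is a guess, so check the repo's file list first. The same pattern applies to the other Stable-Baselines3 cards in this dump.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(
    repo_id="memorysaver/ppo-LunarLander-v2-2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```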
memorysaver/TEST2ppo-LunarLander-v2
memorysaver
2022-05-14T17:02:04Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T17:01:33Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 192.42 +/- 91.58
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
syp1229/bert-base-finetuned-koidiom
syp1229
2022-05-14T16:44:17Z
3
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-14T16:42:21Z
---
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/bert-base-finetuned-koidiom
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# syp1229/bert-base-finetuned-koidiom

This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1288
- Validation Loss: 1.8307
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1288     | 1.8307          | 0     |

### Framework versions

- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
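A minimal masked-filling sketch, under assumptions: the repo ships TensorFlow weights (per its `tf` tag), so the TF framework is requested explicitly, and the Korean example sentence is arbitrary. The base klue/bert-base tokenizer uses `[MASK]`.

```python
from transformers import pipeline

# Hedged sketch; top candidates for the masked token are returned.
fill = pipeline(
    "fill-mask",
    model="syp1229/bert-base-finetuned-koidiom",
    framework="tf",
)
print(fill("그는 발이 [MASK] 사람이다."))
```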
meln1k/ppo-CartPole-v1
meln1k
2022-05-14T16:37:49Z
3
0
stable-baselines3
[ "stable-baselines3", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T16:37:31Z
---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **PPO** Agent playing **CartPole-v1**

This is a trained model of a **PPO** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
akreal/mbart-large-50-finetuned-slurp
akreal
2022-05-14T16:36:01Z
5
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "mbart-50", "en", "dataset:SLURP", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-14T15:56:23Z
---
language:
- en
tags:
- mbart-50
license: apache-2.0
datasets:
- SLURP
metrics:
- accuracy
- slu-f1
---

This model is the `mbart-large-50-many-to-many-mmt` model fine-tuned on the text part of the [SLURP](https://github.com/pswietojanski/slurp) spoken language understanding dataset.

The scores on the test set are 85.68% Intent accuracy and 79.00% SLU-F1.
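A generation sketch under assumptions: the card does not document the expected input/output format, so the model is treated here as a plain English-to-English seq2seq parser (SLU models of this kind usually emit a serialized intent/slot string, but that is not guaranteed), and the tokenizer is assumed to be the standard mBART-50 one saved with the repo.

```python
from transformers import AutoTokenizer, MBartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("akreal/mbart-large-50-finetuned-slurp")
model = MBartForConditionalGeneration.from_pretrained("akreal/mbart-large-50-finetuned-slurp")

# mBART-50 needs explicit language codes; English on both sides here.
tokenizer.src_lang = "en_XX"
inputs = tokenizer("wake me up at seven in the morning", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```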
syp1229/koelectra-base-v3-generator-finetuned-koidiom
syp1229
2022-05-14T16:14:31Z
3
0
transformers
[ "transformers", "tf", "electra", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-14T16:10:36Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/koelectra-base-v3-generator-finetuned-koidiom
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# syp1229/koelectra-base-v3-generator-finetuned-koidiom

This model is a fine-tuned version of [monologg/koelectra-base-v3-generator](https://huggingface.co/monologg/koelectra-base-v3-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4310
- Validation Loss: 2.0533
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4310     | 2.0533          | 0     |

### Framework versions

- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
nadirbekovnadir/LunarLander-281_23
nadirbekovnadir
2022-05-14T15:38:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T15:38:03Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 278.11 +/- 23.37
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
syp1229/roberta-base-finetuned-koidiom
syp1229
2022-05-14T15:31:26Z
3
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-14T15:29:33Z
---
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/roberta-base-finetuned-koidiom
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# syp1229/roberta-base-finetuned-koidiom

This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5140
- Validation Loss: 2.0026
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5140     | 2.0026          | 0     |

### Framework versions

- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
nadirbekovnadir/LunarLander-283_19
nadirbekovnadir
2022-05-14T13:25:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T13:25:08Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 283.38 +/- 17.68
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
huggingtweets/vrsoloviev
huggingtweets
2022-05-14T13:25:22Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-14T13:21:58Z
---
language: en
thumbnail: http://www.huggingtweets.com/vrsoloviev/1652534655103/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1170975520458203136/4eDVAZZa_400x400.jpg')">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vladimir Soloviev</div>
<div style="text-align: center; font-size: 14px;">@vrsoloviev</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Vladimir Soloviev.

| Data | Vladimir Soloviev |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 9 |
| Short tweets | 29 |
| Tweets kept | 3212 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/elfi2jwn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vrsoloviev's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2m2arnt6) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2m2arnt6/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/vrsoloviev')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
FumaNet/TEST2PPO-LunarLander-v2
FumaNet
2022-05-14T12:30:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T12:30:16Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 261.60 +/- 27.38
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
Sigma/financial-sentiment-analysis
Sigma
2022-05-14T11:48:56Z
79
17
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-14T08:41:10Z
---
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: financial-sentiment-analysis
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: financial_phrasebank
      type: financial_phrasebank
      args: sentences_allagree
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9924242424242424
    - name: F1
      type: f1
      value: 0.9924242424242424
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# financial-sentiment-analysis

This model is a fine-tuned version of [ahmedrachid/FinancialBERT](https://huggingface.co/ahmedrachid/FinancialBERT) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0395
- Accuracy: 0.9924
- F1: 0.9924

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
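A minimal classification sketch; the label names returned depend on the `id2label` mapping saved with the checkpoint (Financial PhraseBank models typically use negative/neutral/positive, but verify on the repo). The example sentence is in the style of the dataset.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Sigma/financial-sentiment-analysis",
)
print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."))
```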
nadirbekovnadir/LunarLander-278_18
nadirbekovnadir
2022-05-14T11:40:41Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T11:40:01Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 278.68 +/- 16.88
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
nadirbekovnadir/LunarLander-278_18_2
nadirbekovnadir
2022-05-14T11:39:44Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T11:39:04Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 274.15 +/- 17.03
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code
danieleV9H/hubert-base-timit-demo-google-colab-ft30ep_v5
danieleV9H
2022-05-14T10:32:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-12T20:23:29Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hubert-base-timit-demo-google-colab-ft30ep_v5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hubert-base-timit-demo-google-colab-ft30ep_v5

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the timit-asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4763
- Wer: 0.3322

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.9596        | 0.87  | 500   | 3.1237          | 1.0    |
| 2.5388        | 1.73  | 1000  | 1.1689          | 0.9184 |
| 1.0448        | 2.6   | 1500  | 0.6106          | 0.5878 |
| 0.6793        | 3.46  | 2000  | 0.4912          | 0.5200 |
| 0.5234        | 4.33  | 2500  | 0.4529          | 0.4798 |
| 0.4368        | 5.19  | 3000  | 0.4239          | 0.4543 |
| 0.3839        | 6.06  | 3500  | 0.4326          | 0.4339 |
| 0.3315        | 6.92  | 4000  | 0.4265          | 0.4173 |
| 0.2878        | 7.79  | 4500  | 0.4304          | 0.4068 |
| 0.25          | 8.65  | 5000  | 0.4130          | 0.3940 |
| 0.242         | 9.52  | 5500  | 0.4310          | 0.3938 |
| 0.2182        | 10.38 | 6000  | 0.4204          | 0.3843 |
| 0.2063        | 11.25 | 6500  | 0.4449          | 0.3816 |
| 0.2099        | 12.11 | 7000  | 0.4016          | 0.3681 |
| 0.1795        | 12.98 | 7500  | 0.4027          | 0.3647 |
| 0.1604        | 13.84 | 8000  | 0.4294          | 0.3664 |
| 0.1683        | 14.71 | 8500  | 0.4412          | 0.3661 |
| 0.1452        | 15.57 | 9000  | 0.4484          | 0.3588 |
| 0.1491        | 16.44 | 9500  | 0.4508          | 0.3515 |
| 0.1388        | 17.3  | 10000 | 0.4240          | 0.3518 |
| 0.1399        | 18.17 | 10500 | 0.4605          | 0.3513 |
| 0.1265        | 19.03 | 11000 | 0.4412          | 0.3485 |
| 0.1137        | 19.9  | 11500 | 0.4520          | 0.3467 |
| 0.106         | 20.76 | 12000 | 0.4873          | 0.3426 |
| 0.1243        | 21.63 | 12500 | 0.4456          | 0.3396 |
| 0.1055        | 22.49 | 13000 | 0.4819          | 0.3406 |
| 0.1124        | 23.36 | 13500 | 0.4613          | 0.3391 |
| 0.1064        | 24.22 | 14000 | 0.4842          | 0.3430 |
| 0.0875        | 25.09 | 14500 | 0.4661          | 0.3348 |
| 0.086         | 25.95 | 15000 | 0.4724          | 0.3371 |
| 0.0842        | 26.82 | 15500 | 0.4982          | 0.3381 |
| 0.0834        | 27.68 | 16000 | 0.4856          | 0.3337 |
| 0.0918        | 28.55 | 16500 | 0.4783          | 0.3344 |
| 0.0773        | 29.41 | 17000 | 0.4763          | 0.3322 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
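A minimal transcription sketch: TIMIT-trained models expect 16 kHz mono audio, and `"sample.wav"` is a placeholder path, not a file from the repo.

```python
from transformers import pipeline

# The ASR pipeline handles feature extraction and CTC decoding internally.
asr = pipeline(
    "automatic-speech-recognition",
    model="danieleV9H/hubert-base-timit-demo-google-colab-ft30ep_v5",
)
print(asr("sample.wav")["text"])
```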
conan1024hao/cjkbert-small
conan1024hao
2022-05-14T10:18:04Z
5
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "ja", "zh", "ko", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-14T09:07:12Z
---
language:
- ja
- zh
- ko
license: cc-by-sa-4.0
datasets:
- wikipedia
mask_token: "[MASK]"
widget:
- text: "早稲田大学で自然言語処理を[MASK]ぶ。"
- text: "李白是[MASK]朝人。"
- text: "불고기[MASK] 먹겠습니다."
---

### Model description

- This model was trained on the **ZH, JA, KO** Wikipedia dumps (5 epochs).

### How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("conan1024hao/cjkbert-small")
model = AutoModelForMaskedLM.from_pretrained("conan1024hao/cjkbert-small")
```

- You don't need any text segmentation before fine-tuning on downstream tasks.
- (Though you may obtain better results if you apply morphological analysis to the data before fine-tuning.)

### Morphological analysis tools

- ZH: For Chinese, we use [LTP](https://github.com/HIT-SCIR/ltp).
- JA: For Japanese, we use [Juman++](https://github.com/ku-nlp/jumanpp).
- KO: For Korean, we use [KoNLPy](https://github.com/konlpy/konlpy) (Kkma class).

### Tokenization

- We use character-based tokenization with a **whole-word-masking** strategy.

### Model size

- vocab_size: 15015
- num_hidden_layers: 4
- hidden_size: 512
- num_attention_heads: 8
- param_num: 25M
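For a quick sanity check without writing any decoding code, the fill-mask pipeline works as well; this sketch reuses one of the widget examples above.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="conan1024hao/cjkbert-small")
for candidate in fill("早稲田大学で自然言語処理を[MASK]ぶ。"):
    print(candidate["token_str"], candidate["score"])
```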
BitanBiswas/wav2vec2-base-timit-demo-google-colab
BitanBiswas
2022-05-14T07:46:48Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-14T05:46:49Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-timit-demo-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4770
- Wer: 0.3360

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6401        | 1.0   | 500   | 2.4138          | 1.0    |
| 0.9717        | 2.01  | 1000  | 0.6175          | 0.5531 |
| 0.4393        | 3.01  | 1500  | 0.4309          | 0.4414 |
| 0.2976        | 4.02  | 2000  | 0.4167          | 0.4162 |
| 0.2345        | 5.02  | 2500  | 0.4273          | 0.3927 |
| 0.1919        | 6.02  | 3000  | 0.3983          | 0.3886 |
| 0.1565        | 7.03  | 3500  | 0.5581          | 0.3928 |
| 0.1439        | 8.03  | 4000  | 0.4509          | 0.3821 |
| 0.1266        | 9.04  | 4500  | 0.4733          | 0.3774 |
| 0.1091        | 10.04 | 5000  | 0.4755          | 0.3808 |
| 0.1001        | 11.04 | 5500  | 0.4435          | 0.3689 |
| 0.0911        | 12.05 | 6000  | 0.4962          | 0.3897 |
| 0.0813        | 13.05 | 6500  | 0.5031          | 0.3622 |
| 0.0729        | 14.06 | 7000  | 0.4853          | 0.3597 |
| 0.0651        | 15.06 | 7500  | 0.5180          | 0.3577 |
| 0.0608        | 16.06 | 8000  | 0.5251          | 0.3630 |
| 0.0592        | 17.07 | 8500  | 0.4915          | 0.3591 |
| 0.0577        | 18.07 | 9000  | 0.4724          | 0.3656 |
| 0.0463        | 19.08 | 9500  | 0.4536          | 0.3546 |
| 0.0475        | 20.08 | 10000 | 0.5107          | 0.3546 |
| 0.0464        | 21.08 | 10500 | 0.4829          | 0.3464 |
| 0.0369        | 22.09 | 11000 | 0.4844          | 0.3448 |
| 0.0327        | 23.09 | 11500 | 0.4865          | 0.3437 |
| 0.0337        | 24.1  | 12000 | 0.4825          | 0.3488 |
| 0.0271        | 25.1  | 12500 | 0.4824          | 0.3445 |
| 0.0236        | 26.1  | 13000 | 0.4747          | 0.3397 |
| 0.0243        | 27.11 | 13500 | 0.4840          | 0.3397 |
| 0.0226        | 28.11 | 14000 | 0.4716          | 0.3354 |
| 0.0235        | 29.12 | 14500 | 0.4770          | 0.3360 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
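A manual CTC decoding sketch, as an alternative to the pipeline route shown for the hubert card above. It assumes the repo includes processor files (the standard fine-tuning notebook saves them alongside the model) and that the input is mono audio; `"sample.wav"` is a placeholder path.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("BitanBiswas/wav2vec2-base-timit-demo-google-colab")
model = Wav2Vec2ForCTC.from_pretrained("BitanBiswas/wav2vec2-base-timit-demo-google-colab")

# Load and resample to the 16 kHz rate the model was trained on.
speech, rate = torchaudio.load("sample.wav")
if rate != 16000:
    speech = torchaudio.functional.resample(speech, rate, 16000)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decode: pick the best token per frame, then collapse repeats.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```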
NeonPigeon/TEST2ppo-LunarLander-v2
NeonPigeon
2022-05-14T06:48:08Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-14T05:31:06Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 289.62 +/- 18.60
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code