| column | type | range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-01 12:28:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 530 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-01 12:27:35 |
| card | string | length 11 – 1.01M |
Kuntal/distilbert-base-uncased-finetuned-sst2
Kuntal
2023-01-10T08:11:11Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-10T07:35:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: sst2 split: train args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9059633027522935 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3487 - Accuracy: 0.9060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1874 | 1.0 | 4210 | 0.3487 | 0.9060 | | 0.1309 | 2.0 | 8420 | 0.3840 | 0.9037 | | 0.1009 | 3.0 | 12630 | 0.3770 | 0.9048 | | 0.063 | 4.0 | 16840 | 0.4741 | 0.8979 | | 0.0357 | 5.0 | 21050 | 0.5241 | 0.9002 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
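The card's usage sections are left as "More information needed". A minimal sketch of how this checkpoint could be queried with the `transformers` text-classification pipeline, assuming the repository id above; the label names returned depend on the checkpoint's config and are not documented by the card:

```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned SST-2 checkpoint by its repo id.
classifier = pipeline(
    "text-classification",
    model="Kuntal/distilbert-base-uncased-finetuned-sst2",
)

# SST-2 is binary sentiment classification; labels may appear as LABEL_0/LABEL_1
# unless id2label was set during training.
print(classifier("This movie was a delight from start to finish."))
```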
Art-phys/ppo-LunarLander-62M-v2
Art-phys
2023-01-10T08:09:53Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T06:37:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 274.11 +/- 41.16 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
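The usage section above is still the template placeholder ("TODO: Add your code"). A hedged, self-contained sketch of how such an SB3 checkpoint is typically loaded with `huggingface_sb3`; the checkpoint filename is an assumption and should be checked against the repository's file list:

```python
import gymnasium as gym  # or `import gym`, depending on the version used for training
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; verify the actual .zip name in the repository.
checkpoint = load_from_hub(
    repo_id="Art-phys/ppo-LunarLander-62M-v2",
    filename="ppo-LunarLander-62M-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy (gymnasium-style API).
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```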
PooryaPiroozfar/Flair-Persian-NER
PooryaPiroozfar
2023-01-10T08:01:46Z
4,649
7
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fa", "region:us" ]
token-classification
2023-01-09T20:19:52Z
--- tags: - flair - token-classification - sequence-tagger-model language: fa dataset: - NSURL-2019 widget: - text: >- اولین نمایش این فیلم‌ها روز دوشنبه 13 اردیبهشت و از ساعت 21 در موزه سینماست. metrics: - f1 --- ## Persian NER Using Flair This is the 7-class Named-entity recognition model for Persian that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90.33** (NSURL-2019) Predicts NER tags: | **tag** | **meaning** | |:---------------------------------:|:-----------:| | PER | person name | | LOC | location name | | ORG | organization name | | DAT | date | | TIM | time | | PCT | percent | | MON | Money| Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and Pars-Bert. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("PooryaPiroozfar/Flair-Persian-NER") # make example sentence sentence = Sentence("اولین نمایش این فیلم‌ها روز دوشنبه 13 اردیبهشت و از ساعت 21 در موزه سینماست.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span[4:8]: "روز دوشنبه 13 اردیبهشت" → DAT (1.0) Span[10:12]: "ساعت 21" → TIM (1.0) Span[13:15]: "موزه سینماست" → LOC (0.9999) ``` --- ### Results - F-score (micro) 0.9033 - F-score (macro) 0.8976 - Accuracy 0.8277 ``` By class: precision recall f1-score support ORG 0.9016 0.8667 0.8838 1523 LOC 0.9113 0.9305 0.9208 1425 PER 0.9216 0.9322 0.9269 1224 DAT 0.8623 0.7958 0.8277 480 MON 0.9665 0.9558 0.9611 181 PCT 0.9375 0.9740 0.9554 77 TIM 0.8235 0.7925 0.8077 53 micro avg 0.9081 0.8984 0.9033 4963 macro avg 0.9035 0.8925 0.8976 4963 weighted avg 0.9076 0.8984 0.9028 4963 samples avg 0.8277 0.8277 0.8277 4963 ```
susooo/kobigbird-test45-74084713
susooo
2023-01-10T07:43:03Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T03:48:45Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-test45-74084713 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-test45-74084713 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.8963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.84 | 4 | 2.0601 | | No log | 1.84 | 8 | 1.9294 | | No log | 2.84 | 12 | 1.8963 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
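Since the usage sections are empty, here is a minimal sketch of extractive question answering with the `transformers` pipeline, assuming the repository id above; the Korean question/context pair is purely illustrative:

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned KoBigBird QA checkpoint.
qa = pipeline("question-answering", model="susooo/kobigbird-test45-74084713")

result = qa(
    question="이 모델은 어떤 데이터셋으로 학습되었나요?",
    context="이 모델은 custom_squad_v2 데이터셋으로 파인튜닝된 KoBigBird 기반 질의응답 모델입니다.",
)
print(result["answer"], result["score"])
```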
cwinkler/distilbert-base-uncased-finetuned-greenplastics-small
cwinkler
2023-01-10T07:18:08Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-10T07:13:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-greenplastics-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-greenplastics-small This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4816 - Accuracy: 0.87 - F1: 0.8691 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6531 | 1.0 | 11 | 0.5633 | 0.87 | 0.8696 | | 0.5415 | 2.0 | 22 | 0.4816 | 0.87 | 0.8691 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
CauseWhyNot/cardiacarrestjunior
CauseWhyNot
2023-01-10T07:06:44Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-10T07:06:44Z
--- license: creativeml-openrail-m ---
Stokrotka/Taxi-v3
Stokrotka
2023-01-10T06:43:24Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T21:05:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Stokrotka/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
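The snippet above assumes a `load_from_hub` helper and `gym` are already in scope. A self-contained sketch under the assumption that the pickle stores a dict with at least `"env_id"` (confirmed by the card) and a `"qtable"` array (an assumed convention of the course notebooks, not confirmed here):

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Stokrotka/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout using the classic gym API (reset() -> state, step() -> 4-tuple);
# adjust the calls if your gym/gymnasium version uses the newer API.
qtable = np.array(model["qtable"])
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```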
bananaspectre/marian-finetuned-tgl-eng-netspeak-trial9
bananaspectre
2023-01-10T06:41:40Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-01-10T06:18:17Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-tgl-eng-netspeak-trial9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-tgl-eng-netspeak-trial9 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tl-en](https://huggingface.co/Helsinki-NLP/opus-mt-tl-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3568 - Bleu: 29.1370 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 5.6316 | 1.0 | 57 | 4.1183 | 4.6005 | | 4.8654 | 2.0 | 114 | 3.8581 | 5.9435 | | 4.5642 | 3.0 | 171 | 3.6865 | 6.6312 | | 4.3364 | 4.0 | 228 | 3.5646 | 7.3157 | | 4.1474 | 5.0 | 285 | 3.4686 | 8.6700 | | 4.0044 | 6.0 | 342 | 3.3852 | 8.6733 | | 3.8862 | 7.0 | 399 | 3.3257 | 8.3894 | | 3.7676 | 8.0 | 456 | 3.2528 | 9.2599 | | 3.6633 | 9.0 | 513 | 3.2005 | 9.7922 | | 3.5594 | 10.0 | 570 | 3.1615 | 10.9836 | | 3.4683 | 11.0 | 627 | 3.1055 | 11.0111 | | 3.3897 | 12.0 | 684 | 3.0527 | 11.0658 | | 3.3165 | 13.0 | 741 | 3.0106 | 11.3570 | | 3.2338 | 14.0 | 798 | 2.9819 | 12.4296 | | 3.1626 | 15.0 | 855 | 2.9395 | 13.0279 | | 3.1127 | 16.0 | 912 | 2.9088 | 13.2959 | | 3.0224 | 17.0 | 969 | 2.8760 | 13.9185 | | 2.9523 | 18.0 | 1026 | 2.8420 | 14.7849 | | 2.9036 | 19.0 | 1083 | 2.8059 | 15.3255 | | 2.8449 | 20.0 | 1140 | 2.7830 | 15.8899 | | 2.7851 | 21.0 | 1197 | 2.7654 | 15.3016 | | 2.7182 | 22.0 | 1254 | 2.7422 | 15.9169 | | 2.683 | 23.0 | 1311 | 2.7171 | 15.4695 | | 2.6016 | 24.0 | 1368 | 2.6860 | 17.2504 | | 2.5688 | 25.0 | 1425 | 2.6800 | 17.4693 | | 2.511 | 26.0 | 1482 | 2.6523 | 17.8363 | | 2.4627 | 27.0 | 1539 | 2.6247 | 18.6818 | | 2.4259 | 28.0 | 1596 | 2.6038 | 19.2026 | | 2.3814 | 29.0 | 1653 | 2.5946 | 18.9046 | | 2.3368 | 30.0 | 1710 | 2.5720 | 19.6498 | | 2.2639 | 31.0 | 1767 | 2.5564 | 18.7972 | | 2.2366 | 32.0 | 1824 | 2.5432 | 20.1555 | | 2.1884 | 33.0 | 1881 | 2.5369 | 19.9048 | | 2.143 | 34.0 | 1938 | 2.5215 | 19.3706 | | 2.122 | 35.0 | 1995 | 2.5102 | 20.2954 | | 2.0819 | 36.0 | 2052 | 2.4966 | 20.5785 | | 2.0333 | 37.0 | 2109 | 2.4939 | 20.8078 | | 1.9972 | 38.0 | 2166 | 2.4852 | 21.6624 | | 1.9596 | 39.0 | 2223 | 2.4724 | 21.3380 | | 1.9386 | 40.0 | 2280 | 2.4550 | 21.7399 | | 1.8881 | 41.0 | 2337 | 2.4539 | 21.8201 | | 1.8488 | 42.0 | 2394 | 2.4494 | 22.8561 | | 1.8344 | 43.0 | 2451 | 2.4436 | 22.0001 | | 1.8005 | 44.0 | 2508 | 2.4353 | 21.5060 | | 1.7703 | 45.0 | 2565 | 2.4314 | 22.6523 | | 1.7321 | 46.0 | 2622 | 2.4258 | 22.9500 | | 1.6897 | 47.0 | 2679 | 2.4202 | 23.0767 | | 1.6822 | 48.0 | 2736 | 2.4115 | 23.3565 | | 1.6392 | 49.0 | 2793 | 2.4056 | 24.4669 | | 1.621 | 50.0 | 2850 | 2.4071 | 25.7900 | | 1.6075 | 51.0 | 2907 | 2.3930 | 25.8570 | | 1.5558 | 52.0 | 2964 | 2.3835 | 26.0207 | | 1.5335 | 53.0 | 3021 | 2.3848 | 24.5089 | | 1.5091 | 
54.0 | 3078 | 2.3870 | 26.7579 | | 1.4904 | 55.0 | 3135 | 2.3791 | 26.2250 | | 1.4645 | 56.0 | 3192 | 2.3760 | 26.1819 | | 1.4628 | 57.0 | 3249 | 2.3811 | 25.9747 | | 1.4297 | 58.0 | 3306 | 2.3659 | 26.4407 | | 1.4011 | 59.0 | 3363 | 2.3650 | 27.1145 | | 1.3649 | 60.0 | 3420 | 2.3597 | 27.6616 | | 1.3419 | 61.0 | 3477 | 2.3601 | 28.6248 | | 1.3278 | 62.0 | 3534 | 2.3670 | 27.2075 | | 1.3106 | 63.0 | 3591 | 2.3588 | 27.3917 | | 1.2855 | 64.0 | 3648 | 2.3508 | 27.8277 | | 1.2732 | 65.0 | 3705 | 2.3622 | 28.3032 | | 1.259 | 66.0 | 3762 | 2.3603 | 28.0315 | | 1.2397 | 67.0 | 3819 | 2.3551 | 27.9452 | | 1.2285 | 68.0 | 3876 | 2.3597 | 28.5887 | | 1.1898 | 69.0 | 3933 | 2.3599 | 28.5675 | | 1.181 | 70.0 | 3990 | 2.3642 | 29.7412 | | 1.1748 | 71.0 | 4047 | 2.3577 | 29.2003 | | 1.146 | 72.0 | 4104 | 2.3609 | 28.3760 | | 1.1274 | 73.0 | 4161 | 2.3519 | 29.2015 | | 1.1138 | 74.0 | 4218 | 2.3568 | 29.1370 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
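The card's usage sections are empty. A minimal sketch of Tagalog-to-English inference with the `transformers` translation pipeline, assuming the repository id above; the input sentence is only an example:

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned MarianMT Tagalog->English model.
translator = pipeline(
    "translation",
    model="bananaspectre/marian-finetuned-tgl-eng-netspeak-trial9",
)

print(translator("Kumusta ka na? Matagal na tayong hindi nagkita.")[0]["translation_text"])
```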
0xid/Reinforce-Pixelcopter-PLE-v0m
0xid
2023-01-10T06:33:33Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T06:33:20Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0m results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 153.70 +/- 74.49 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
SkyworkAIGC/SkyCode
SkyworkAIGC
2023-01-10T06:20:09Z
14
26
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-12-14T04:51:44Z
# Brief introduction of SkyCode SkyCode is a multilingual open-source programming model released by Singularity-AI. It adopts the GPT-3 model structure and is trained on a large amount of code. It supports Java, JavaScript, C, C++, Python, Go, shell and other mainstream programming languages, and can understand Chinese comments. The model can complete code, solve problems and perform other operations, freeing you from routine programming so you can focus on solving larger problems. ## Project Highlights 1. Technical advantage 1: covers multiple programming languages Different programming languages target different platforms and environments, and each has its own reason to exist. The code that Singularity-AI SkyCode can generate not only includes widely used JavaScript, Python, Java, C, etc., but also covers more than ten programming languages such as PHP, Go, and Swift, so users of different languages can experience SkyCode's powerful code generation capabilities. 2. Technical advantage 2: optimized for Chinese comments Large pre-trained models have long been dominated by the English-language community, and GPT-3-based code generation models share this problem. Drawing on its experience building Chinese models, Singularity-AI designed a unique Chinese encoding method tailored to the characteristics of the language, which better matches Chinese usage and improves the model's ability to understand Chinese comments. 3. Technical advantage 3: excellent problem-solving ability On the HumanEval dataset, which measures a code generation model's problem-solving ability, SkyCode also scores far higher than other open-source models. | model | pass@1 | pass@10 | pass@100 | |:-------------- | ------:|:-------:| -------- | | GPT-Neo 1.3B | 4.79% | 7.47% | 16.30% | | GPT-Neo 2.7B | 6.41% | 11.27% | 21.37% | | GPT-J 6B | 11.62% | 15.74% | 27.74% | | SKY_code(2.6B) | 12.84% | 21.07% | 35.97% | With 2.6B parameters, SkyCode not only scores much higher than the smaller GPT-Neo 1.3B, but also much higher than GPT-Neo 2.7B, which has a comparable parameter count. Even compared with the larger GPT-J 6B model, SkyCode's problem-solving ability is stronger: on pass@100, which better reflects the upper limit of problem-solving ability, SkyCode exceeds GPT-J by 8.23 percentage points. # News of Singularity-AI - [2022.12.15] [AIGC Press Conference of Singularity-AI](https://live.vhall.com/v3/lives/subscribe/697547540) ## Dependencies ``` Recommended: transformers>=4.18.0 ``` ## Model usage ```python # -*- coding: utf-8 -*- from transformers import GPT2LMHeadModel from transformers import AutoTokenizer from transformers import TextGenerationPipeline model = GPT2LMHeadModel.from_pretrained("SkyWork/SkyCode") tokenizer = AutoTokenizer.from_pretrained("SkyWork/SkyCode", trust_remote_code=True) text_generator = TextGenerationPipeline(model, tokenizer, device=0) input_str = "if __name__" max_new_tokens = 40 print(text_generator(input_str, max_new_tokens=max_new_tokens, do_sample=True)) ``` # License [MIT License](LICENSE) # Join the developer group [Scan the QR code with WeChat](https://user-images.githubusercontent.com/120169448/211475709-75b5f652-366f-45a1-b8c0-0bd64e8256bb.jpg) to join the SkyCode developer group.
johnowhitaker/sac_midu_mini
johnowhitaker
2023-01-10T06:16:42Z
0
1
null
[ "region:us" ]
null
2023-01-05T18:44:28Z
```python
# Download the model weights
import torch
import torch.nn as nn
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
from huggingface_hub import hf_hub_download

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = hf_hub_download(repo_id="johnowhitaker/sac_midu_mini", filename="midu_model_aesthetic_classifier.pt")

# Load the aesthetic classifier (a small CNN head over the UNet mid-block features)
m = nn.Sequential(
    nn.Conv2d(1280, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(256, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(output_size=(2, 2)), nn.Flatten(),
    nn.Linear(128*4, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
m.load_state_dict(torch.load(model_path))

# Load the SD pipeline and add a hook on the UNet mid block
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(device)
pipe.scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe.scheduler.set_timesteps(30)

def hook_fn(module, input, output):
    module.output = output

pipe.unet.mid_block.register_forward_hook(hook_fn)

# Now after calling the forward pass of the UNET, you can do
preds = m(pipe.unet.mid_block.output)
```
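A hedged usage sketch: after any pipeline call, the forward hook has stored the most recent mid-block activations, so they can be scored directly. The prompt and the interpretation of the 10 output logits below are assumptions not documented by the card:

```python
# Generate an image; during generation the hook captures the UNet mid-block output.
image = pipe("a watercolor landscape, soft light", num_inference_steps=30).images[0]

# Score the captured features with the aesthetic classifier head.
with torch.no_grad():
    feats = pipe.unet.mid_block.output   # roughly (batch, 1280, h, w)
    logits = m(feats)                    # 10 logits; assumed to be aesthetic-score bins
print(logits.softmax(-1))
```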
YuJungSoo/kobigbird-pure45-34458617
YuJungSoo
2023-01-10T06:02:03Z
93
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T05:11:35Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure45-34458617 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure45-34458617 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 4.3725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.84 | 4 | 5.0505 | | No log | 1.84 | 8 | 4.4642 | | No log | 2.84 | 12 | 4.3725 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
syabusyabu0141/fine1
syabusyabu0141
2023-01-10T06:01:30Z
72
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-07T07:24:46Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: syabusyabu0141/afterabove results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # syabusyabu0141/afterabove This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1391 - Validation Loss: 0.6806 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.1391 | 0.6806 | 0 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
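The card's usage sections are empty. A minimal fill-mask sketch with the `transformers` pipeline, assuming the repository id above; since this checkpoint ships TensorFlow weights, `framework="tf"` is passed explicitly:

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned BERT fill-mask checkpoint.
fill = pipeline("fill-mask", model="syabusyabu0141/fine1", framework="tf")

for candidate in fill("The report was finished [MASK] the deadline."):
    print(candidate["token_str"], round(candidate["score"], 3))
```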
alphahg/kobigbird-test45-36490500
alphahg
2023-01-10T06:00:29Z
89
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T05:11:18Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-test45-36490500 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-test45-36490500 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 5.5195 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.84 | 4 | 5.9855 | | No log | 1.84 | 8 | 5.5968 | | No log | 2.84 | 12 | 5.5195 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
bananaspectre/marian-finetuned-tgl-eng-netspeak-trial6
bananaspectre
2023-01-10T05:55:50Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-01-10T05:43:41Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-tgl-eng-netspeak-trial6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-tgl-eng-netspeak-trial6 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tl-en](https://huggingface.co/Helsinki-NLP/opus-mt-tl-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3343 - Bleu: 28.3769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 150 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 4.5424 | 1.0 | 57 | 3.8496 | 5.8759 | | 3.6414 | 2.0 | 114 | 3.5073 | 9.0838 | | 3.1777 | 3.0 | 171 | 3.2660 | 10.0099 | | 2.7865 | 4.0 | 228 | 3.0646 | 12.0484 | | 2.4638 | 5.0 | 285 | 2.9109 | 14.0737 | | 2.1815 | 6.0 | 342 | 2.7853 | 16.3752 | | 1.9426 | 7.0 | 399 | 2.6832 | 17.7971 | | 1.725 | 8.0 | 456 | 2.6060 | 19.5480 | | 1.5336 | 9.0 | 513 | 2.5433 | 20.8335 | | 1.3571 | 10.0 | 570 | 2.4888 | 21.9160 | | 1.2081 | 11.0 | 627 | 2.4424 | 21.7108 | | 1.0733 | 12.0 | 684 | 2.4045 | 23.9640 | | 0.9516 | 13.0 | 741 | 2.3940 | 24.4162 | | 0.8487 | 14.0 | 798 | 2.3840 | 27.1127 | | 0.7513 | 15.0 | 855 | 2.3563 | 27.3229 | | 0.662 | 16.0 | 912 | 2.3501 | 25.8083 | | 0.5835 | 17.0 | 969 | 2.3506 | 27.0424 | | 0.5247 | 18.0 | 1026 | 2.3355 | 27.9392 | | 0.4648 | 19.0 | 1083 | 2.3379 | 27.1880 | | 0.4047 | 20.0 | 1140 | 2.3343 | 28.3769 | | 0.3574 | 21.0 | 1197 | 2.3431 | 27.9125 | | 0.3183 | 22.0 | 1254 | 2.3407 | 29.3798 | | 0.2828 | 23.0 | 1311 | 2.3408 | 30.5316 | | 0.2528 | 24.0 | 1368 | 2.3368 | 29.9854 | | 0.2306 | 25.0 | 1425 | 2.3603 | 30.4071 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
KJIM/kobigbird-test45-81001466
KJIM
2023-01-10T05:49:02Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T04:57:50Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-test45-81001466 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-test45-81001466 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 3.8401 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00011 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.84 | 4 | 4.5836 | | No log | 1.84 | 8 | 4.1083 | | No log | 2.84 | 12 | 3.8401 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
kupman99/ppo-LunarLander-v2
kupman99
2023-01-10T05:46:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T05:46:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.94 +/- 14.35 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Xhaheen/hazbullay-man-generator
Xhaheen
2023-01-10T05:20:23Z
30
0
diffusers
[ "diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "wildcard", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-10T05:15:11Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - wildcard widget: - text: a photo of hazbullay man with the Statue of Zeus from Ancient Greece in the background --- # DreamBooth model for the hazbullay concept trained by Xhaheen on the bethecloud/golf-courses dataset. This is a Stable Diffusion model fine-tuned on the hazbullay concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of hazbullay man** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `man` images for the wildcard theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('Xhaheen/hazbullay-man-generator') image = pipeline("a photo of hazbullay man").images[0] image ```
mjschock/Reinforce-PixelCopter0
mjschock
2023-01-10T04:50:58Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T04:50:51Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 70.20 +/- 44.62 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
YuJungSoo/kobigbird-pure45-28788823
YuJungSoo
2023-01-10T04:43:50Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T03:54:40Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure45-28788823 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure45-28788823 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 4.5214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.84 | 4 | 5.2857 | | No log | 1.84 | 8 | 4.6361 | | No log | 2.84 | 12 | 4.5214 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Gadersd/Reinforce-Pixelcopter
Gadersd
2023-01-10T04:16:11Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T04:16:06Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 35.30 +/- 25.65 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
kreepy/dqn-SpaceInvadersNoFrameskip-vsc
kreepy
2023-01-10T04:06:19Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T03:24:02Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 593.00 +/- 148.36 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kreepy -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kreepy -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kreepy ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
astein0/ppo-LunarLander-v2
astein0
2023-01-10T04:04:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T04:04:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.60 +/- 16.29 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
bananaspectre/marianmt-finetuned-netspeak-tgl-to-eng
bananaspectre
2023-01-10T03:55:59Z
61
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-01-08T12:10:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: marianmt-finetuned-netspeak-tgl-to-eng results: [] pipeline_tag: translation --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # marianmt-finetuned-netspeak-tgl-to-eng This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tl-en](https://huggingface.co/Helsinki-NLP/opus-mt-tl-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7277 - Validation Loss: 2.0459 - Train Bleu: 33.5501 - Train Gen Len: 8.7228 - Epoch: 93 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:-----:| | 5.4267 | 4.5907 | 3.3310 | 11.7129 | 0 | | 4.6862 | 4.1720 | 3.5594 | 10.4752 | 1 | | 4.4077 | 3.9852 | 4.0100 | 9.2079 | 2 | | 4.2296 | 3.8554 | 3.3190 | 9.3663 | 3 | | 4.0964 | 3.7598 | 4.8776 | 9.4554 | 4 | | 3.9799 | 3.6710 | 4.9744 | 9.6931 | 5 | | 3.8799 | 3.5953 | 5.9838 | 9.4752 | 6 | | 3.7661 | 3.5248 | 6.4073 | 9.3366 | 7 | | 3.6807 | 3.4588 | 6.2692 | 9.1485 | 8 | | 3.5932 | 3.3964 | 6.0781 | 9.0990 | 9 | | 3.5110 | 3.3384 | 6.6363 | 9.0891 | 10 | | 3.4294 | 3.2892 | 7.0472 | 9.2079 | 11 | | 3.3566 | 3.2363 | 7.2707 | 9.1782 | 12 | | 3.2796 | 3.1878 | 7.9426 | 9.1683 | 13 | | 3.2026 | 3.1376 | 7.9254 | 9.2772 | 14 | | 3.1472 | 3.0926 | 8.2076 | 9.1188 | 15 | | 3.0634 | 3.0496 | 8.5193 | 9.2475 | 16 | | 3.0124 | 3.0082 | 8.9990 | 9.1485 | 17 | | 2.9554 | 2.9696 | 11.2816 | 9.1485 | 18 | | 2.8885 | 2.9352 | 12.0866 | 9.0396 | 19 | | 2.8403 | 2.8974 | 12.8611 | 9.1485 | 20 | | 2.7636 | 2.8661 | 13.0981 | 9.1485 | 21 | | 2.7229 | 2.8269 | 12.9295 | 8.9010 | 22 | | 2.6714 | 2.7951 | 14.0159 | 8.8713 | 23 | | 2.6179 | 2.7644 | 13.7369 | 8.7624 | 24 | | 2.5520 | 2.7348 | 14.0979 | 8.8119 | 25 | | 2.5199 | 2.7059 | 14.5253 | 8.7426 | 26 | | 2.4652 | 2.6832 | 13.8452 | 8.7030 | 27 | | 2.4081 | 2.6537 | 15.6475 | 8.9505 | 28 | | 2.3708 | 2.6302 | 16.1325 | 8.8713 | 29 | | 2.3195 | 2.6124 | 16.0044 | 8.7426 | 30 | | 2.2938 | 2.5892 | 16.8560 | 8.8020 | 31 | | 2.2202 | 2.5700 | 16.8995 | 8.8911 | 32 | | 2.1808 | 2.5456 | 17.5342 | 8.8416 | 33 | | 2.1373 | 2.5262 | 18.4092 | 8.6337 | 34 | | 2.1096 | 2.5082 | 18.1906 | 8.6436 | 35 | | 2.0610 | 2.4896 | 18.3189 | 8.7525 | 36 | | 2.0275 | 2.4725 | 18.4318 | 8.6436 | 37 | | 1.9913 | 2.4534 | 18.1136 | 8.6832 | 38 | | 1.9544 | 2.4403 | 19.2999 | 8.6040 | 39 | | 1.9144 | 2.4220 | 19.1325 | 8.6535 | 40 | | 1.8781 | 2.4075 | 19.4122 | 8.6337 | 41 | | 1.8610 | 2.3928 | 21.0270 | 8.6832 | 42 | | 1.8176 | 2.3779 | 20.9122 | 8.7921 | 43 | | 1.7839 | 2.3618 | 20.3906 | 8.7624 | 44 | | 1.7553 | 2.3466 | 20.9078 | 8.7327 | 45 | | 1.7045 | 2.3368 | 20.7228 | 8.7030 | 46 | | 1.6974 | 2.3221 | 20.7889 | 8.7426 | 47 | | 1.6561 | 2.3109 | 20.8293 | 8.7129 | 48 | | 1.6264 | 2.2991 | 20.3201 | 8.5644 | 49 | | 1.5976 | 2.2906 | 22.7905 | 8.6139 | 50 | | 1.5725 | 2.2820 
| 23.9301 | 8.7228 | 51 | | 1.5528 | 2.2702 | 23.5437 | 8.6733 | 52 | | 1.5158 | 2.2612 | 22.9832 | 8.6040 | 53 | | 1.4883 | 2.2509 | 24.6290 | 8.6733 | 54 | | 1.4497 | 2.2434 | 25.6293 | 8.6139 | 55 | | 1.4357 | 2.2336 | 25.4158 | 8.6634 | 56 | | 1.4105 | 2.2290 | 25.2337 | 8.5644 | 57 | | 1.3803 | 2.2194 | 26.2588 | 8.5941 | 58 | | 1.3606 | 2.2118 | 25.8251 | 8.6139 | 59 | | 1.3389 | 2.2073 | 26.2269 | 8.5842 | 60 | | 1.3064 | 2.1966 | 26.2973 | 8.6040 | 61 | | 1.2747 | 2.1893 | 27.3831 | 8.5743 | 62 | | 1.2586 | 2.1811 | 28.4823 | 8.6733 | 63 | | 1.2445 | 2.1740 | 27.5688 | 8.6139 | 64 | | 1.2201 | 2.1576 | 29.3111 | 8.5347 | 65 | | 1.1924 | 2.1487 | 28.3428 | 8.6040 | 66 | | 1.1657 | 2.1464 | 28.8596 | 8.5941 | 67 | | 1.1435 | 2.1469 | 28.7870 | 8.5743 | 68 | | 1.1274 | 2.1382 | 29.5455 | 8.6436 | 69 | | 1.1080 | 2.1297 | 29.4602 | 8.6139 | 70 | | 1.0907 | 2.1257 | 28.2800 | 8.7525 | 71 | | 1.0881 | 2.1207 | 29.2731 | 8.6337 | 72 | | 1.0534 | 2.1179 | 29.9292 | 8.7624 | 73 | | 1.0389 | 2.1096 | 29.9660 | 8.5347 | 74 | | 1.0186 | 2.1052 | 29.7106 | 8.5446 | 75 | | 0.9953 | 2.0959 | 30.0563 | 8.5050 | 76 | | 0.9727 | 2.0977 | 30.0527 | 8.5446 | 77 | | 0.9543 | 2.0878 | 29.8762 | 8.5446 | 78 | | 0.9372 | 2.0871 | 30.4451 | 8.4950 | 79 | | 0.9234 | 2.0804 | 30.7829 | 8.5347 | 80 | | 0.9045 | 2.0774 | 31.2911 | 8.6337 | 81 | | 0.8920 | 2.0727 | 31.4189 | 8.4752 | 82 | | 0.8729 | 2.0761 | 30.5640 | 8.7624 | 83 | | 0.8466 | 2.0735 | 31.4347 | 8.7525 | 84 | | 0.8430 | 2.0677 | 31.1463 | 8.6139 | 85 | | 0.8340 | 2.0669 | 31.5623 | 8.7228 | 86 | | 0.8152 | 2.0587 | 31.9364 | 8.6535 | 87 | | 0.7916 | 2.0548 | 31.6855 | 8.6238 | 88 | | 0.7829 | 2.0562 | 33.4523 | 8.7426 | 89 | | 0.7678 | 2.0559 | 32.0304 | 8.7129 | 90 | | 0.7509 | 2.0540 | 32.7711 | 8.7525 | 91 | | 0.7406 | 2.0498 | 33.6200 | 8.7030 | 92 | | 0.7277 | 2.0459 | 33.5501 | 8.7228 | 93 | ### Framework versions - Transformers 4.25.1 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
kreepy/dqn-SpaceInvadersNoFrameskip-v4
kreepy
2023-01-10T03:51:42Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T22:24:22Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 598.50 +/- 193.64 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kreepy -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kreepy -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kreepy ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Elldreth/hearthstone-fantasy
Elldreth
2023-01-10T03:49:03Z
223
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-10T03:48:51Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: hearthstone-fantasy results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5263158082962036 --- # hearthstone-fantasy Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### fantasy ![fantasy](images/fantasy.jpg) #### hearthstone ![hearthstone](images/hearthstone.jpg) #### warcraft ![warcraft](images/warcraft.jpg)
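The card shows example images but no inference code. A minimal sketch using the `transformers` image-classification pipeline, assuming the repository id above; the image path is a placeholder:

```python
from transformers import pipeline

# Hypothetical usage sketch for the HuggingPics ViT classifier.
classifier = pipeline("image-classification", model="Elldreth/hearthstone-fantasy")

# Replace with any local file, URL, or PIL image; per the card's example images,
# the labels should be among fantasy / hearthstone / warcraft.
print(classifier("example.jpg"))
```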
lmqg/bart-large-squad-qag
lmqg
2023-01-10T03:29:11Z
105
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "questions and answers generation", "en", "dataset:lmqg/qag_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-18T06:24:56Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qag_squad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/bart-large-squad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_squad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 92.16 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 91.17 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 93.21 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 63.79 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 61.32 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 66.71 --- # Model Card of `lmqg/bart-large-squad-qag` This model is fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-large-squad-qag") # model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qag") output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.16 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 63.79 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 93.21 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 66.71 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 91.17 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 61.32 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: facebook/bart-large - max_length: 512 - max_length_output: 256 - epoch: 14 - batch: 8 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
lmqg/bart-base-squad-qag
lmqg
2023-01-10T03:27:48Z
112
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "questions and answers generation", "en", "dataset:lmqg/qag_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-15T04:51:39Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qag_squad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/bart-base-squad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_squad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 84.49 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 83.38 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 85.64 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 57.46 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 55.26 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 60.01 --- # Model Card of `lmqg/bart-base-squad-qag` This model is fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-base-squad-qag") # model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-base-squad-qag") output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 84.49 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 57.46 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 85.64 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 60.01 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 83.38 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 55.26 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: facebook/bart-base - max_length: 512 - max_length_output: 256 - epoch: 2 - batch: 16 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-squad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
lmqg/t5-large-squad-qag
lmqg
2023-01-10T03:27:02Z
28
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "questions and answers generation", "en", "dataset:lmqg/qag_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-19T02:43:35Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qag_squad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/t5-large-squad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_squad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 93.45 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 93.57 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 93.34 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 66.05 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 65.84 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 66.34 --- # Model Card of `lmqg/t5-large-squad-qag` This model is fine-tuned version of [t5-large](https://huggingface.co/t5-large) for question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [t5-large](https://huggingface.co/t5-large) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-large-squad-qag") # model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qag") output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 93.45 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 66.05 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 93.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 66.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 93.57 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 65.84 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: ['qag'] - model: t5-large - max_length: 512 - max_length_output: 256 - epoch: 12 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-squad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Kuntal/distilbert-base-uncased-finetuned-cola
Kuntal
2023-01-10T03:09:38Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-10T02:57:24Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: cola
      split: train
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5340667882909217
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8130
- Matthews Correlation: 0.5341

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5214        | 1.0   | 535  | 0.5266          | 0.4239               |
| 0.3449        | 2.0   | 1070 | 0.5079          | 0.5052               |
| 0.2347        | 3.0   | 1605 | 0.5736          | 0.5185               |
| 0.1764        | 4.0   | 2140 | 0.7526          | 0.5305               |
| 0.1324        | 5.0   | 2675 | 0.8130          | 0.5341               |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
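The card above leaves its intended-uses section open, so a minimal, hedged usage sketch may help. It only assumes the public `transformers` pipeline API and the repository id shown in this record; the example sentences and the raw `LABEL_0`/`LABEL_1` output names are assumptions, since the auto-generated config may not map CoLA labels to human-readable names.

```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned CoLA checkpoint for text classification.
classifier = pipeline("text-classification", model="Kuntal/distilbert-base-uncased-finetuned-cola")

# CoLA is a grammatical-acceptability task; the raw labels are expected to be
# LABEL_0 / LABEL_1 unless the config maps them to readable names (assumption).
print(classifier("The book was written by the author."))
print(classifier("Book the was author written by the."))
```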
lmqg/t5-base-squad-qag
lmqg
2023-01-10T03:08:25Z
379
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "questions and answers generation", "en", "dataset:lmqg/qag_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-19T02:49:52Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qag_squad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/t5-base-squad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_squad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 93.34 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 93.51 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 93.18 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 65.78 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 65.68 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 65.96 --- # Model Card of `lmqg/t5-base-squad-qag` This model is fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [t5-base](https://huggingface.co/t5-base) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-base-squad-qag") # model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qag") output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 93.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 65.78 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 93.18 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 65.96 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 93.51 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 65.68 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: ['qag'] - model: t5-base - max_length: 512 - max_length_output: 256 - epoch: 17 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
lmqg/t5-small-squad-qag
lmqg
2023-01-10T03:07:26Z
142
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "questions and answers generation", "en", "dataset:lmqg/qag_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-15T04:50:13Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qag_squad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/t5-small-squad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_squad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 92.76 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 92.68 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 92.87 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 64.59 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 63.99 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 65.3 --- # Model Card of `lmqg/t5-small-squad-qag` This model is fine-tuned version of [t5-small](https://huggingface.co/t5-small) for question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [t5-small](https://huggingface.co/t5-small) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-small-squad-qag") # model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qag") output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.76 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 64.59 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 92.87 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 65.3 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 92.68 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 63.99 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: ['qag'] - model: t5-small - max_length: 512 - max_length_output: 256 - epoch: 18 - batch: 32 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
lmqg/bart-large-squad-qg
lmqg
2023-01-10T03:00:53Z
21
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "question generation", "en", "dataset:lmqg/qg_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_squad pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/bart-large-squad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 26.17 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 53.85 - name: METEOR (Question Generation) type: meteor_question_generation value: 27.07 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 91.0 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 64.99 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.54 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.49 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.59 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 70.82 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 70.54 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 71.13 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 93.23 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 93.35 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 93.13 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 64.76 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 64.63 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer] type: 
qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 64.98 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: amazon args: amazon metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.06530369842068952 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.25030985091008146 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.2229994442645732 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9092814804525936 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6086538514008419 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: new_wiki args: new_wiki metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.11118273173452982 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.2967546690273089 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.27315087810722966 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9322739617807421 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6623000084761579 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: nyt args: nyt metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.08117757543966063 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.25292097720734297 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.25254205113198686 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9249009759439454 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6406329128556304 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: reddit args: reddit metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.059525104157825456 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.22365090580055863 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.21499800504546457 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9095144685254328 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6059332247878408 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: books args: books metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.006278914808207679 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.12368226019088967 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.11576293675813865 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8807110440044503 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5555905941686486 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: electronics args: electronics metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.00866799444965211 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.1601628874804186 - 
name: METEOR (Question Generation) type: meteor_question_generation value: 0.15348605312210778 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8783386920680519 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5634845371093992 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: grocery args: grocery metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.00528043272450429 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.12343711316491492 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.15133496445452477 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8778951253890991 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5701949938103265 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: movies args: movies metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.0121579426501661e-06 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.12508697028506718 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.11862284941640638 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8748829724726739 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5528899173535703 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: restaurants args: restaurants metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.1301750984972448e-06 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.13083168975354642 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.12419733006916912 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8797711839570719 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5542757411268555 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: tripadvisor args: tripadvisor metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 8.380171318718442e-07 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.1402922852924756 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.1372146070365174 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8891002409937424 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5604572211470809 --- # Model Card of `lmqg/bart-large-squad-qg` This model is fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-large-squad-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 58.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 42.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 33.11 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 26.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 27.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 64.99 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 53.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 95.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 70.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 95.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 71.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 95.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 70.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/bart-large-squad-ae`](https://huggingface.co/lmqg/bart-large-squad-ae). [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_bart-large-squad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 93.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 64.76 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 93.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 64.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 93.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 64.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metrics (Question Generation, Out-of-Domain)*** | Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link | |:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:| | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 90.93 | 6.53 | 22.3 | 60.87 | 25.03 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 93.23 | 11.12 | 27.32 | 66.23 | 29.68 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 92.49 | 8.12 | 25.25 | 64.06 | 25.29 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 90.95 | 5.95 | 21.5 | 60.59 | 22.37 | 
[link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 88.07 | 0.63 | 11.58 | 55.56 | 12.37 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.83 | 0.87 | 15.35 | 56.35 | 16.02 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.79 | 0.53 | 15.13 | 57.02 | 12.34 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 87.49 | 0.0 | 11.86 | 55.29 | 12.51 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.98 | 0.0 | 12.42 | 55.43 | 13.08 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 88.91 | 0.0 | 13.72 | 56.05 | 14.03 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: facebook/bart-large - max_length: 512 - max_length_output: 32 - epoch: 4 - batch: 32 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
lmqg/t5-large-squad-qg
lmqg
2023-01-10T02:57:58Z
212
4
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "en", "dataset:lmqg/qg_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_squad pipeline_tag: text2text-generation tags: - question generation widget: - text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/t5-large-squad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 27.21 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 54.13 - name: METEOR (Question Generation) type: meteor_question_generation value: 27.7 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 91.0 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 65.29 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.57 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.51 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.62 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 71.1 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 70.8 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 71.41 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 92.97 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 93.14 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 92.83 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 64.72 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 64.66 - name: QAAlignedPrecision-MoverScore 
(Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 64.87 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: amazon args: amazon metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.06900290231938097 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.2533914694448162 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.23008771718972076 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.911505327721968 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6121573406359604 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: new_wiki args: new_wiki metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.11180552552578073 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.30058260713604856 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.2792115028015132 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9316688723462665 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6630609588403827 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: nyt args: nyt metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.08047293820182351 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.2518886524420378 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.2567360224537303 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9241819763475975 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6437327703980464 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: reddit args: reddit metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.059479733408388684 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.21988765767997162 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.21853957131436155 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.909493447578926 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6064107011094938 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: books args: books metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 8.038380813854933e-07 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.09871887977864714 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.11967515095282454 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.879356137120911 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5548471413251269 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: electronics args: electronics metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.008434036066953862 - name: ROUGE-L (Question Generation) type: 
rouge_l_question_generation value: 0.14134333081097744 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.1616192221446712 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8786280911509731 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.560488065035827 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: grocery args: grocery metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.007639835274564104 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.105046370156132 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.1540402363682146 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8749810194969178 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.56763136192963 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: movies args: movies metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.149076256883913e-06 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.12272623105315689 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.13027427314652157 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8733754583767482 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5536261740282519 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: restaurants args: restaurants metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.8508536550762953e-10 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.1192666899417942 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.12447769563902232 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8825407926650608 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5591163692270524 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: tripadvisor args: tripadvisor metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.007817275411070228 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.14594416096461188 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.16297700667338805 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8928685000227912 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5681021918513103 --- # Model Card of `lmqg/t5-large-squad-qg` This model is fine-tuned version of [t5-large](https://huggingface.co/t5-large) for question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [t5-large](https://huggingface.co/t5-large) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-large-squad-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 59.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 43.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 34.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 27.21 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 27.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 65.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 54.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 95.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 71.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 95.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 71.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 95.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 70.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/t5-large-squad-ae`](https://huggingface.co/lmqg/t5-large-squad-ae). [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_t5-large-squad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 64.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 64.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 93.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 64.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metrics (Question Generation, Out-of-Domain)*** | Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link | |:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:| | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 91.15 | 6.9 | 23.01 | 61.22 | 25.34 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 93.17 | 11.18 | 27.92 | 66.31 | 30.06 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 92.42 | 8.05 | 25.67 | 64.37 | 25.19 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 90.95 | 5.95 | 21.85 | 60.64 | 21.99 | 
[link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 87.94 | 0.0 | 11.97 | 55.48 | 9.87 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.86 | 0.84 | 16.16 | 56.05 | 14.13 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.5 | 0.76 | 15.4 | 56.76 | 10.5 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 87.34 | 0.0 | 13.03 | 55.36 | 12.27 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 88.25 | 0.0 | 12.45 | 55.91 | 11.93 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 89.29 | 0.78 | 16.3 | 56.81 | 14.59 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-large - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 16 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Jbot/ppo-Huggy
Jbot
2023-01-10T02:57:54Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-10T02:57:46Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: Jbot/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
lmqg/t5-small-squad-qg
lmqg
2023-01-10T02:54:20Z
234
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "en", "dataset:lmqg/qg_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_squad pipeline_tag: text2text-generation tags: - question generation widget: - text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/t5-small-squad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 24.4 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 51.43 - name: METEOR (Question Generation) type: meteor_question_generation value: 25.84 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 90.2 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 63.89 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.14 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.09 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 95.19 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 69.79 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 69.51 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 70.09 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 92.26 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 92.48 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 92.07 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 63.83 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 63.82 - name: QAAlignedPrecision-MoverScore 
(Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 63.92 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: amazon args: amazon metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.05446530981230419 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.22970251150837936 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.20750111458026313 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8994468043449728 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5979360752045209 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: new_wiki args: new_wiki metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.104778841878282 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.2810996054026912 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.2620896643265683 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9260609935106264 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6505447280842604 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: nyt args: nyt metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.06968574467261796 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.23034544400347773 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.2366281135333324 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.9170723215078939 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.6286133349914554 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squadshifts type: reddit args: reddit metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.04750005928226048 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.20103251416604878 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.19795765672224766 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8956885570918934 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5923103575686176 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: books args: books metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 9.484839636219606e-07 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.10882963005711024 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.12295516249732996 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8739685463031549 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5533617434235973 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: electronics args: electronics metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.01163379406564442 - name: ROUGE-L (Question Generation) type: 
rouge_l_question_generation value: 0.1561742307706773 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.1548763941617263 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.871218326462417 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.555469199401916 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: grocery args: grocery metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.005200691923654061 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.12630554732425642 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.14946423426295516 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8721985507011414 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5711858634802471 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: movies args: movies metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 9.928321423080042e-07 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.1263481480649435 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.12111872719101677 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.868397428617849 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5500525496260875 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: restaurants args: restaurants metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.728249026089261e-10 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.11532401921027728 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.12673504956336362 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8748602174660739 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5503550909114101 - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: tripadvisor args: tripadvisor metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.01455898541449453 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 0.1424064090212074 - name: METEOR (Question Generation) type: meteor_question_generation value: 0.15534444057817395 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 0.8839819959101786 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 0.5591337724792363 --- # Model Card of `lmqg/t5-small-squad-qg` This model is fine-tuned version of [t5-small](https://huggingface.co/t5-small) for question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [t5-small](https://huggingface.co/t5-small) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-small-squad-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 40.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 31.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 24.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 25.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 63.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 95.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 69.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 95.19 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 70.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 95.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 69.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/t5-small-squad-ae`](https://huggingface.co/lmqg/t5-small-squad-ae). [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_t5-small-squad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 63.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 63.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 92.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 63.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metrics (Question Generation, Out-of-Domain)*** | Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link | |:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:| | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 89.94 | 5.45 | 20.75 | 59.79 | 22.97 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 92.61 | 10.48 | 26.21 | 65.05 | 28.11 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 91.71 | 6.97 | 23.66 | 62.86 | 23.03 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 89.57 | 4.75 | 19.8 | 59.23 | 20.1 | 
[link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 87.4 | 0.0 | 12.3 | 55.34 | 10.88 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.12 | 1.16 | 15.49 | 55.55 | 15.62 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.22 | 0.52 | 14.95 | 57.12 | 12.63 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 86.84 | 0.0 | 12.11 | 55.01 | 12.63 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.49 | 0.0 | 12.67 | 55.04 | 11.53 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 88.4 | 1.46 | 15.53 | 55.91 | 14.24 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-small - max_length: 512 - max_length_output: 32 - epoch: 9 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 1 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
YuJungSoo/kobigbird-pure45-82642472
YuJungSoo
2023-01-10T02:52:04Z
91
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T01:20:19Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure45-82642472 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure45-82642472 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.5578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.3233 | | No log | 1.99 | 84 | 1.1793 | | No log | 2.99 | 126 | 1.5578 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
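The card above leaves its usage section empty, so here is a minimal extractive-QA sketch with the `transformers` pipeline, assuming the fine-tuned weights and tokenizer were pushed to this repository; the Korean context and question are illustrative only.

```python
from transformers import pipeline

# A minimal sketch, assuming the fine-tuned checkpoint and tokenizer are available in this repo
qa = pipeline("question-answering", model="YuJungSoo/kobigbird-pure45-82642472")

# Illustrative Korean SQuAD-style example:
# context: "The KoBigBird model was fine-tuned on Korean question-answering data."
# question: "On data of which language was it fine-tuned?"
context = "KoBigBird 모델은 한국어 질의응답 데이터로 미세조정되었다."
question = "어떤 언어의 데이터로 미세조정되었나?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```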
eduardokapp/ppo-LunarLander-v2
eduardokapp
2023-01-10T02:27:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T02:26:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -226.89 +/- 25.60 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
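The usage block above is still the template's TODO, so here is a hedged sketch of loading the policy with `huggingface_sb3` and rolling out one episode. The checkpoint filename inside the repo and the classic Gym reset/step API are assumptions; check the repository's file list and your installed `gym`/`gymnasium` version.

```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is hypothetical -- check the repo's files
checkpoint = load_from_hub(
    repo_id="eduardokapp/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out a single episode with the loaded policy (classic Gym API assumed)
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```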
muhtasham/small-mlm-glue-cola-target-glue-wnli
muhtasham
2023-01-10T02:11:58Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-10T01:46:21Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-mlm-glue-cola-target-glue-wnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-cola-target-glue-wnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-cola](https://huggingface.co/muhtasham/small-mlm-glue-cola) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.0250 - Accuracy: 0.0563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6311 | 25.0 | 500 | 2.6389 | 0.0845 | | 0.3168 | 50.0 | 1000 | 5.1490 | 0.0986 | | 0.1452 | 75.0 | 1500 | 6.3515 | 0.0986 | | 0.0775 | 100.0 | 2000 | 7.5723 | 0.0704 | | 0.056 | 125.0 | 2500 | 8.0250 | 0.0563 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
AleNunezArroyo/distilbert-base-spanish-uncased-model
AleNunezArroyo
2023-01-10T01:50:04Z
126
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-10T01:27:04Z
--- tags: - generated_from_trainer model-index: - name: distilbert-base-spanish-uncased-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-spanish-uncased-model This model is a fine-tuned version of [CenIA/distilbert-base-spanish-uncased](https://huggingface.co/CenIA/distilbert-base-spanish-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7074 | 1.0 | 920 | 2.2671 | | 2.2717 | 2.0 | 1840 | 2.0866 | | 2.1587 | 3.0 | 2760 | 2.0233 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
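Since the usage section of this card is empty, a minimal masked-language-modelling sketch follows, assuming the fine-tuned checkpoint and tokenizer were pushed to this repository; the Spanish sentence is illustrative only.

```python
from transformers import pipeline

# A minimal sketch, assuming the checkpoint in this repo loads with the fill-mask pipeline
unmasker = pipeline("fill-mask", model="AleNunezArroyo/distilbert-base-spanish-uncased-model")

# Use the tokenizer's own mask token rather than hard-coding it
mask = unmasker.tokenizer.mask_token
for prediction in unmasker(f"La capital de Francia es {mask}."):  # illustrative sentence
    print(prediction["token_str"], round(prediction["score"], 3))
```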
andreids/en_nature_of_li_multilabel
andreids
2023-01-10T01:29:17Z
3
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-01-10T01:28:51Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_nature_of_li_multilabel results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_nature_of_li_multilabel` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `textcat_multilabel` | | **Components** | `textcat_multilabel` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (8 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat_multilabel`** | `shirt`, `balloon`, `cream`, `socks`, `pants`, `shampoo`, `toy`, `sweater` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 95.82 | | `CATS_MICRO_P` | 99.77 | | `CATS_MICRO_R` | 99.60 | | `CATS_MICRO_F` | 99.69 | | `CATS_MACRO_P` | 74.48 | | `CATS_MACRO_R` | 73.84 | | `CATS_MACRO_F` | 74.14 | | `CATS_MACRO_AUC` | 95.82 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TEXTCAT_MULTILABEL_LOSS` | 7.61 |
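The card documents the label scheme but not how to run the pipeline; below is a minimal sketch assuming the packaged spaCy pipeline from this repo has been pip-installed (spaCy models on the Hub are normally distributed as installable wheels).

```python
import spacy

# A minimal sketch, assuming the packaged pipeline wheel from this repo is installed
nlp = spacy.load("en_nature_of_li_multilabel")

doc = nlp("Soft cotton sweater with matching socks.")  # illustrative line-item text
# textcat_multilabel writes one independent score per label into doc.cats
for label, score in sorted(doc.cats.items(), key=lambda item: -item[1]):
    print(f"{label}: {score:.2f}")
```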
YuJungSoo/kobigbird-pure46-467565
YuJungSoo
2023-01-10T01:01:54Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-10T00:20:02Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure46-467565 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure46-467565 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 46 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.3187 | | No log | 1.99 | 84 | 1.2002 | | No log | 2.99 | 126 | 1.4834 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
rmeireles/ppo-Huggy
rmeireles
2023-01-10T00:52:22Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-10T00:52:14Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: rmeireles/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
muhtasham/small-vanilla-target-glue-wnli
muhtasham
2023-01-10T00:49:10Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-10T00:23:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-vanilla-target-glue-wnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-vanilla-target-glue-wnli This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.2398 - Accuracy: 0.0845 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6354 | 25.0 | 500 | 2.5362 | 0.0845 | | 0.3043 | 50.0 | 1000 | 5.1175 | 0.0986 | | 0.138 | 75.0 | 1500 | 6.7552 | 0.0986 | | 0.0732 | 100.0 | 2000 | 7.6533 | 0.0986 | | 0.0413 | 125.0 | 2500 | 8.2398 | 0.0845 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
squidcrash/ppo-LunarLander-v2
squidcrash
2023-01-10T00:26:06Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-10T00:25:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.55 +/- 16.59 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
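As with the other LunarLander cards above, the usage block is the template's TODO; this sketch re-computes a mean reward comparable to the one reported in the card. The checkpoint filename is an assumption.

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Hypothetical filename -- check the repo's file list
checkpoint = load_from_hub(
    repo_id="squidcrash/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate over a handful of episodes, mirroring the card's reported mean_reward
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```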
muhtasham/small-vanilla-target-glue-stsb
muhtasham
2023-01-10T00:22:18Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T23:55:27Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: small-vanilla-target-glue-stsb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-vanilla-target-glue-stsb This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5625 - Pearson: 0.8713 - Spearmanr: 0.8677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.823 | 2.78 | 500 | 0.5972 | 0.8689 | 0.8689 | | 0.2951 | 5.56 | 1000 | 0.5683 | 0.8725 | 0.8710 | | 0.181 | 8.33 | 1500 | 0.5985 | 0.8695 | 0.8657 | | 0.1349 | 11.11 | 2000 | 0.5915 | 0.8708 | 0.8679 | | 0.1067 | 13.89 | 2500 | 0.5625 | 0.8713 | 0.8677 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
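STS-B is a regression task (similarity on a 0 to 5 scale), so the usual classification pipeline is less convenient here; the sketch below reads the single regression logit directly, assuming the fine-tuned head was saved with this checkpoint. The sentence pair is illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "muhtasham/small-vanilla-target-glue-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score an illustrative sentence pair; STS-B targets lie roughly in [0, 5]
inputs = tokenizer("A man is playing a guitar.", "A person plays a guitar.", return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {similarity:.2f}")
```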
kozina/newworld
kozina
2023-01-10T00:22:09Z
0
0
null
[ "image-classification", "cs", "dataset:fka/awesome-chatgpt-prompts", "region:us" ]
image-classification
2023-01-10T00:17:42Z
--- datasets: - fka/awesome-chatgpt-prompts language: - cs pipeline_tag: image-classification ---
DiegoD616/Reinforce-Pixelcopter-PLE-v0
DiegoD616
2023-01-10T00:09:13Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T03:41:24Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 28.20 +/- 22.90 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muhtasham/small-mlm-glue-cola-target-glue-rte
muhtasham
2023-01-10T00:06:12Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T23:45:11Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-mlm-glue-cola-target-glue-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-cola-target-glue-rte This model is a fine-tuned version of [muhtasham/small-mlm-glue-cola](https://huggingface.co/muhtasham/small-mlm-glue-cola) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9023 - Accuracy: 0.6318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4086 | 6.41 | 500 | 1.2604 | 0.6390 | | 0.0549 | 12.82 | 1000 | 2.3633 | 0.6318 | | 0.0276 | 19.23 | 1500 | 2.9521 | 0.6282 | | 0.0188 | 25.64 | 2000 | 2.9023 | 0.6318 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
SatCat/Reinforce-Cartpole-v1
SatCat
2023-01-09T23:42:46Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T23:42:15Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
cleanrl/Skiing-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
cleanrl
2023-01-09T23:30:28Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Skiing-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T23:30:24Z
--- tags: - Skiing-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Skiing-v5 type: Skiing-v5 metrics: - type: mean_reward value: -12803.50 +/- 19.59 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Skiing-v5** This is a trained model of a PPO agent playing Skiing-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]" python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Skiing-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Skiing-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py curl -OL https://huggingface.co/cleanrl/Skiing-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Skiing-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock poetry install --all-extras python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Skiing-v5 --seed 1 ``` # Hyperparameters ```python {'anneal_lr': True, 'async_batch_size': 16, 'batch_size': 2048, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Skiing-v5', 'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado', 'gae': True, 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1024, 'norm_adv': True, 'num_envs': 64, 'num_minibatches': 2, 'num_steps': 32, 'num_updates': 24414, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 2, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'envpool-atari'} ```
saurabhnaik/ppo-LunarLnaderV1
saurabhnaik
2023-01-09T23:09:34Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T20:21:51Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 282.92 +/- 19.05 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
YuJungSoo/kobigbird-pure45-19926792
YuJungSoo
2023-01-09T23:08:37Z
92
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T22:41:18Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure45-19926792 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure45-19926792 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.1392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 45 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.2244 | | No log | 1.99 | 84 | 1.1392 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
SuburbanLion/Reinforce-Pixelcopter-PLE-v0
SuburbanLion
2023-01-09T23:06:29Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T19:15:33Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 30.80 +/- 26.39 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
simlaharma/vit-base-cifar10
simlaharma
2023-01-09T23:04:32Z
26
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "vision", "generated_from_trainer", "dataset:cifar10", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-04T22:35:51Z
--- license: apache-2.0 tags: - image-classification - vision - generated_from_trainer datasets: - cifar10 metrics: - accuracy model-index: - name: vit-base-cifar10 results: - task: name: Image Classification type: image-classification dataset: name: cifar10 type: cifar10 config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.106 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-cifar10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 2.3302 - Accuracy: 0.106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.3324 | 1.0 | 664 | 2.3352 | 0.0967 | | 2.3489 | 2.0 | 1328 | 2.3288 | 0.1049 | | 2.4899 | 3.0 | 1992 | 2.4473 | 0.0989 | | 2.479 | 4.0 | 2656 | 2.4894 | 0.1 | | 2.4179 | 5.0 | 3320 | 2.4404 | 0.0947 | | 2.3881 | 6.0 | 3984 | 2.3931 | 0.102 | | 2.3597 | 7.0 | 4648 | 2.3744 | 0.0967 | | 2.3721 | 8.0 | 5312 | 2.3667 | 0.0935 | | 2.3456 | 9.0 | 5976 | 2.3495 | 0.1036 | | 2.3361 | 10.0 | 6640 | 2.3473 | 0.1025 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.8.0 - Tokenizers 0.13.2
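The card stops at the training results; a minimal inference sketch follows, assuming the checkpoint and image processor were pushed to this repo. Note the reported ~10.6% accuracy is chance level for CIFAR-10, so predictions from this checkpoint will be close to random; the image URL is illustrative.

```python
import requests
from PIL import Image
from transformers import pipeline

# A minimal sketch, assuming this repo loads with the image-classification pipeline
classifier = pipeline("image-classification", model="simlaharma/vit-base-cifar10")

# Illustrative image; substitute any local file or PIL.Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

for prediction in classifier(image, top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```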
Utkarsh-Verma/sd-class-butterflies-32
Utkarsh-Verma
2023-01-09T23:01:54Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-01-09T23:01:31Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Utkarsh-Verma/sd-class-butterflies-32') image = pipeline().images[0] image ```
gauthamk28/dqn-SpaceInvadersNoFrameskip-v4
gauthamk28
2023-01-09T23:00:10Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T22:59:34Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 665.00 +/- 321.50 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gauthamk28 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gauthamk28 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gauthamk28 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
egumasa/en_engagement_spl_RoBERTa_acad
egumasa
2023-01-09T22:51:03Z
8
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2022-12-24T22:40:44Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_engagement_spl_RoBERTa_acad results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.0 - name: NER Recall type: recall value: 0.0 - name: NER F Score type: f_score value: 0.0 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.0 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.0 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.0 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.0 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.8508399109 --- | Feature | Description | | --- | --- | | **Name** | `en_engagement_spl_RoBERTa_acad` | | **Version** | `0.6.0` | | **spaCy** | `>=3.4.4,<3.5.0` | | **Default Pipeline** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Components** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (124 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | | **`spancat`** | `MONOGLOSS`, `ENDORSE`, `ENDOPHORIC`, `ENTERTAIN`, `PRONOUNCE`, `DENY`, `COUNTER`, `JUSTIFYING`, `ATTRIBUTE`, `SOURCES`, `CITATION`, `CONCUR` | </details> ### Accuracy | Type | Score | | --- | --- | | `DEP_UAS` | 0.00 | | `DEP_LAS` | 0.00 | | `DEP_LAS_PER_TYPE` | 0.00 | | `SENTS_P` | 82.24 | | `SENTS_R` | 88.13 | | `SENTS_F` | 85.08 | | `TAG_ACC` | 0.00 | | `ENTS_F` | 0.00 | | `ENTS_P` | 0.00 | | `ENTS_R` | 0.00 | | `LEMMA_ACC` | 0.00 | | `SPANS_SC_F` | 72.96 | | `SPANS_SC_P` | 75.47 | | `SPANS_SC_R` | 70.61 | | `TRAINABLE_TRANSFORMER_LOSS` | 651.38 | | `SPANCAT_LOSS` | 93570.83 |
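This card reports span-categorizer scores (the SPANS_SC_* rows) but no usage snippet; the sketch below assumes the packaged pipeline wheel from this repo has been installed, and that the spancat component writes to the default "sc" spans key, which the metric names suggest.

```python
import spacy

# A minimal sketch, assuming the packaged pipeline wheel from this repo is installed
nlp = spacy.load("en_engagement_spl_RoBERTa_acad")

doc = nlp("These results may suggest that the proposed treatment is effective.")  # illustrative sentence
# The span categorizer stores labelled spans under doc.spans["sc"] (key inferred from SPANS_SC_* metrics)
for span in doc.spans.get("sc", []):
    print(span.text, "->", span.label_)
```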
atorre/q-Taxi-v3
atorre
2023-01-09T22:35:51Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T22:35:45Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym from huggingface_hub import hf_hub_download import pickle5 as pickle model_file = hf_hub_download(repo_id="atorre/Taxi-v3", filename="q-learning.pkl") with open(model_file, 'rb') as f: model = pickle.load(f) env = gym.make(model["env_id"]) ```
BobMcDear/vit_base_patch32_224_sam
BobMcDear
2023-01-09T22:31:18Z
0
0
null
[ "region:us" ]
null
2023-01-09T22:30:18Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
henryscheible/qnli
henryscheible
2023-01-09T22:24:30Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:42:27Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: qnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.9141497345780707 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4604 - Accuracy: 0.9141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
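A minimal inference sketch, assuming the standard 🤗 Transformers sequence-classification API; the question/sentence pair is illustrative and the exported head may report generic LABEL_0/LABEL_1 names:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "henryscheible/qnli"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# QNLI pairs a question with a candidate answer sentence
question = "What is the capital of France?"
sentence = "Paris has been the capital of France since its liberation in 1944."
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```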
muhtasham/small-mlm-glue-cola-target-glue-qnli
muhtasham
2023-01-09T22:09:18Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T21:16:27Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-mlm-glue-cola-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-cola-target-glue-qnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-cola](https://huggingface.co/muhtasham/small-mlm-glue-cola) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3594 - Accuracy: 0.8532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4881 | 0.15 | 500 | 0.3958 | 0.8265 | | 0.4461 | 0.31 | 1000 | 0.3827 | 0.8321 | | 0.4217 | 0.46 | 1500 | 0.3588 | 0.8453 | | 0.413 | 0.61 | 2000 | 0.3758 | 0.8384 | | 0.4119 | 0.76 | 2500 | 0.3414 | 0.8494 | | 0.3935 | 0.92 | 3000 | 0.3324 | 0.8559 | | 0.3551 | 1.07 | 3500 | 0.3450 | 0.8532 | | 0.3194 | 1.22 | 4000 | 0.3468 | 0.8620 | | 0.3162 | 1.37 | 4500 | 0.3460 | 0.8622 | | 0.3219 | 1.53 | 5000 | 0.3594 | 0.8532 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Shaier/pubmed_qa_biolinkbert
Shaier
2023-01-09T22:06:26Z
105
0
transformers
[ "transformers", "pytorch", "bert", "multiple-choice", "generated_from_trainer", "dataset:pubmed_qa", "endpoints_compatible", "region:us" ]
multiple-choice
2023-01-09T20:34:19Z
--- tags: - generated_from_trainer datasets: - pubmed_qa model-index: - name: pubmed_qa_biolinkbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pubmed_qa_biolinkbert This model was trained from scratch on the pubmed_qa dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 25 - total_train_batch_size: 200 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 120 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.11.0
GDJ1978/cardassians
GDJ1978
2023-01-09T21:55:36Z
0
0
null
[ "region:us" ]
null
2023-01-05T19:32:45Z
Checkpoint trigger prompts: cardass9-300/600/900 ckpt = "photo of cardass9 person with black hair"; c-ds9350 = "photo of ds9cardassian person with black hair"; c-ds9500 ckpt = "photo of c-ds9 person with black hair"; cds9300/325/350 = "photo of cds9 person"; cardass1an ckpt = "photo of cardass1an person". --- The number in each cds9 name is the training step count. "cardass1an" is the trigger word, except for the cds9 models: use "cds9" or "c-ds9 person" for the model that includes class images, or "ds9cardassian person with black hair" for the 350-step model trained with a classification model. Trained on 3 instance images of Cardassians from DS9 for 1000 steps, with no prior preservation and no class images, using the following arguments: --pretrained_model_name_or_path=$MODEL_NAME \ --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \ --output_dir=$OUTPUT_DIR \ --revision="fp16" \ --seed=1337 \ --resolution=512 \ --train_batch_size=1 \ --train_text_encoder \ --mixed_precision="fp16" \ --use_8bit_adam \ --gradient_accumulation_steps=1 --gradient_checkpointing \ --learning_rate=1e-6 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --num_class_images=0 \ --sample_batch_size=1 \ --max_train_steps=1000 \ --save_interval=500 \ --save_sample_prompt="photo of cardass1an person" \ --concepts_list="concepts_list.json"
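A hedged generation sketch: the files above are .ckpt checkpoints, so this assumes one of them has first been converted to the 🤗 Diffusers layout (the local path below is hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to a checkpoint converted to the Diffusers format
pipe = StableDiffusionPipeline.from_pretrained(
    "./cardass1an-diffusers", torch_dtype=torch.float16
).to("cuda")

# Prompt uses the trigger word listed on this card
image = pipe("photo of cardass1an person, portrait, studio lighting").images[0]
image.save("cardassian.png")
```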
tarapunchik/sd-class-butterflies-63
tarapunchik
2023-01-09T21:49:25Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-01-09T21:49:06Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('tarapunchik/sd-class-butterflies-63') image = pipeline().images[0] image ```
CCMat/ddpm-church-finetune-wikiart-256
CCMat
2023-01-09T21:43:42Z
50
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2022-12-23T13:07:09Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- This model is a diffusion model for unconditional image generation of churches ⛪️ finetuned on wikiart 🎨.<br> Pretrained model : google/ddpm-church-256<br> Dataset : huggan/wikiart<br> ## Usage ```python from diffusers import DDPMPipeline model_id = 'CCMat/ddpm-church-finetune-wikiart' # load model and scheduler pipeline = DDPMPipeline.from_pretrained(model_id) # run pipeline in inference (sample random noise and denoise) image = pipeline().images[0] # save image image.save("ddpm_church_wikiart.png") ``` ## Samples ![example images](images/ddpm_church_wikiart_hf_0.png) ![example images](images/ddpm_church_wikiart_hf_3.png) ![example images](images/ddpm_church_wikiart_hf_7.png) ![example images](images/ddpm_church_wikiart_hf_8.png) ![example images](images/ddpm_church_wikiart_hf_9.png) ![example images](images/ddpm_church_wikiart_hf_10.png)
KJIM/kobigbird-base43-52774701
KJIM
2023-01-09T21:20:47Z
91
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T17:54:55Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-base43-52774701 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-base43-52774701 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.3126 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.3113 | | No log | 1.99 | 84 | 1.3126 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-wnli-target-glue-wnli
muhtasham
2023-01-09T21:10:49Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T21:06:59Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-wnli-target-glue-wnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-wnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1020 - Accuracy: 0.1127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6885 | 25.0 | 500 | 0.7726 | 0.2394 | | 0.658 | 50.0 | 1000 | 1.1609 | 0.0986 | | 0.6084 | 75.0 | 1500 | 1.6344 | 0.1127 | | 0.5481 | 100.0 | 2000 | 2.1020 | 0.1127 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-wnli-target-glue-stsb
muhtasham
2023-01-09T21:05:43Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T20:56:54Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: tiny-mlm-glue-wnli-target-glue-stsb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-stsb This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8784 - Pearson: 0.7929 - Spearmanr: 0.7891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 3.3443 | 2.78 | 500 | 1.5642 | 0.5784 | 0.6011 | | 1.2259 | 5.56 | 1000 | 1.0907 | 0.7358 | 0.7382 | | 0.8948 | 8.33 | 1500 | 0.9367 | 0.7750 | 0.7751 | | 0.7357 | 11.11 | 2000 | 0.8525 | 0.7934 | 0.7905 | | 0.6119 | 13.89 | 2500 | 0.8436 | 0.7977 | 0.7944 | | 0.5301 | 16.67 | 3000 | 0.8999 | 0.7947 | 0.7928 | | 0.4657 | 19.44 | 3500 | 0.8341 | 0.7989 | 0.7943 | | 0.4104 | 22.22 | 4000 | 0.8818 | 0.7972 | 0.7930 | | 0.3686 | 25.0 | 4500 | 0.8811 | 0.7973 | 0.7929 | | 0.3348 | 27.78 | 5000 | 0.8784 | 0.7929 | 0.7891 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
charlemagne/distilbert-base-uncased-final-mnli
charlemagne
2023-01-09T21:04:24Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T21:01:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-final-mnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-final-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1836 - Accuracy: 0.9548 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 70 | 0.6386 | 0.7877 | | No log | 2.0 | 140 | 0.3014 | 0.9322 | | No log | 3.0 | 210 | 0.2330 | 0.9341 | | No log | 4.0 | 280 | 0.1990 | 0.9539 | | No log | 5.0 | 350 | 0.1836 | 0.9548 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.8.0+cu111 - Datasets 2.1.0 - Tokenizers 0.11.6
KJIM/kobigbird-base42-45602195
KJIM
2023-01-09T21:01:44Z
92
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T17:12:36Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-base42-45602195 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-base42-45602195 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.6187 | | No log | 1.99 | 84 | 1.4457 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.10.2+cu113 - Datasets 2.8.0 - Tokenizers 0.13.2
Jander/j01
Jander
2023-01-09T21:01:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-09T21:01:22Z
--- license: creativeml-openrail-m ---
muhtasham/tiny-mlm-glue-wnli-target-glue-sst2
muhtasham
2023-01-09T20:55:48Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T20:47:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-wnli-target-glue-sst2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-sst2 This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4379 - Accuracy: 0.8245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5813 | 0.24 | 500 | 0.4900 | 0.7649 | | 0.4434 | 0.48 | 1000 | 0.4701 | 0.7810 | | 0.3931 | 0.71 | 1500 | 0.4431 | 0.7924 | | 0.3729 | 0.95 | 2000 | 0.4576 | 0.7890 | | 0.3315 | 1.19 | 2500 | 0.4439 | 0.8062 | | 0.3141 | 1.43 | 3000 | 0.4594 | 0.8050 | | 0.2976 | 1.66 | 3500 | 0.4395 | 0.8142 | | 0.2905 | 1.9 | 4000 | 0.4367 | 0.8154 | | 0.2724 | 2.14 | 4500 | 0.4948 | 0.8062 | | 0.2524 | 2.38 | 5000 | 0.4379 | 0.8245 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/small-vanilla-target-glue-qnli
muhtasham
2023-01-09T20:52:10Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:57:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-vanilla-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-vanilla-target-glue-qnli This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3458 - Accuracy: 0.8583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.488 | 0.15 | 500 | 0.3901 | 0.8316 | | 0.4449 | 0.31 | 1000 | 0.3826 | 0.8373 | | 0.4243 | 0.46 | 1500 | 0.3596 | 0.8448 | | 0.4133 | 0.61 | 2000 | 0.3663 | 0.8417 | | 0.4102 | 0.76 | 2500 | 0.3459 | 0.8499 | | 0.3924 | 0.92 | 3000 | 0.3286 | 0.8585 | | 0.3539 | 1.07 | 3500 | 0.3467 | 0.8532 | | 0.3202 | 1.22 | 4000 | 0.3478 | 0.8636 | | 0.3183 | 1.37 | 4500 | 0.3574 | 0.8514 | | 0.3215 | 1.53 | 5000 | 0.3458 | 0.8583 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
offlinehq/ppo-LunarLander-v2
offlinehq
2023-01-09T20:47:53Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T20:46:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 297.30 +/- 14.03 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
muhtasham/tiny-mlm-glue-wnli-target-glue-rte
muhtasham
2023-01-09T20:46:37Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T20:39:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-wnli-target-glue-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-rte This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6882 - Accuracy: 0.5596 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6475 | 6.41 | 500 | 0.7071 | 0.5596 | | 0.4526 | 12.82 | 1000 | 0.8708 | 0.5704 | | 0.2668 | 19.23 | 1500 | 1.1317 | 0.5704 | | 0.162 | 25.64 | 2000 | 1.4052 | 0.5704 | | 0.0978 | 32.05 | 2500 | 1.8224 | 0.5812 | | 0.0658 | 38.46 | 3000 | 2.0893 | 0.5668 | | 0.0488 | 44.87 | 3500 | 2.4656 | 0.5560 | | 0.0409 | 51.28 | 4000 | 2.6882 | 0.5596 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Deisler/q-FrozenLake-v1-4x4-noSlippery
Deisler
2023-01-09T20:40:52Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T20:40:49Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym model = load_from_hub(repo_id="Deisler/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
henryscheible/sst2
henryscheible
2023-01-09T20:40:08Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:42:12Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9334862385321101 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sst2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3521 - Accuracy: 0.9335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
PandaIsInSpace/Blackberry_Mix
PandaIsInSpace
2023-01-09T20:39:59Z
0
1
null
[ "region:us" ]
null
2023-01-09T19:42:03Z
A mix of NAI, SDv1.4, Zeipher F111, and R34. Not my mix, just uploading for personal use.
muhtasham/tiny-mlm-glue-wnli-target-glue-qqp
muhtasham
2023-01-09T20:37:31Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T20:21:34Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-glue-wnli-target-glue-qqp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-qqp This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4204 - Accuracy: 0.7892 - F1: 0.7460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5839 | 0.04 | 500 | 0.5193 | 0.7299 | 0.6543 | | 0.5179 | 0.09 | 1000 | 0.4861 | 0.7508 | 0.6874 | | 0.5047 | 0.13 | 1500 | 0.4916 | 0.7406 | 0.7097 | | 0.4871 | 0.18 | 2000 | 0.4647 | 0.7584 | 0.7182 | | 0.4789 | 0.22 | 2500 | 0.4564 | 0.7637 | 0.7240 | | 0.4622 | 0.26 | 3000 | 0.4496 | 0.7668 | 0.7296 | | 0.4617 | 0.31 | 3500 | 0.4468 | 0.7678 | 0.7343 | | 0.454 | 0.35 | 4000 | 0.4415 | 0.7718 | 0.7376 | | 0.4553 | 0.4 | 4500 | 0.4371 | 0.7755 | 0.7415 | | 0.4438 | 0.44 | 5000 | 0.4204 | 0.7892 | 0.7460 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
jamm55/autotrain-pidgintranslationmix-2798982563
jamm55
2023-01-09T20:24:11Z
113
2
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain", "translation", "unk", "dataset:jamm55/autotrain-data-pidgintranslationmix", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-01-09T20:17:45Z
--- tags: - autotrain - translation language: - unk - unk datasets: - jamm55/autotrain-data-pidgintranslationmix co2_eq_emissions: emissions: 9.975347552307483 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 2798982563 - CO2 Emissions (in grams): 9.9753 ## Validation Metrics - Loss: 1.760 - SacreBLEU: 17.015 - Gen len: 23.459
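A minimal usage sketch with the 🤗 Transformers translation pipeline; the card does not state the source/target direction, so the example sentence below is only illustrative:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="jamm55/autotrain-pidgintranslationmix-2798982563",
)

print(translator("How you dey today?", max_length=64)[0]["translation_text"])
```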
muhtasham/tiny-mlm-glue-wnli-target-glue-qnli
muhtasham
2023-01-09T20:17:41Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T20:08:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-wnli-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4737 - Accuracy: 0.7794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6298 | 0.15 | 500 | 0.5598 | 0.7249 | | 0.563 | 0.31 | 1000 | 0.5282 | 0.7435 | | 0.5386 | 0.46 | 1500 | 0.5010 | 0.7571 | | 0.527 | 0.61 | 2000 | 0.5312 | 0.7426 | | 0.5221 | 0.76 | 2500 | 0.4837 | 0.7743 | | 0.5131 | 0.92 | 3000 | 0.4730 | 0.7785 | | 0.4991 | 1.07 | 3500 | 0.4643 | 0.7860 | | 0.4896 | 1.22 | 4000 | 0.4685 | 0.7809 | | 0.4755 | 1.37 | 4500 | 0.4734 | 0.7783 | | 0.4829 | 1.53 | 5000 | 0.4737 | 0.7794 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
bs-la/bloomz-7b1-4b-ru
bs-la
2023-01-09T20:15:59Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "bloom", "feature-extraction", "dataset:bs-la/xP3ru", "arxiv:2212.09535", "arxiv:2211.01786", "license:bigscience-bloom-rail-1.0", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-12-05T07:27:30Z
--- datasets: - bs-la/xP3ru license: bigscience-bloom-rail-1.0 model-index: - name: bloomz-7b1 results: - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 53.97 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 33.49 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 48.64 --- # Model Summary [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) finetuned on Russian multitask data. Hence the same as [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1), but with **only** Russian finetuning data. 4b stands for 4 billion finetuning tokens (same as bloomz-7b1). # Citation ``` @article{yong2022bloom+, title={BLOOM+ 1: Adding Language Support to BLOOM for Zero-Shot Prompting}, author={Yong, Zheng-Xin and Schoelkopf, Hailey and Muennighoff, Niklas and Aji, Alham Fikri and Adelani, David Ifeoluwa and Almubarak, Khalid and Bari, M Saiful and Sutawika, Lintang and Kasai, Jungo and Baruwa, Ahmed and others}, journal={arXiv preprint arXiv:2212.09535}, year={2022} } ``` ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
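A minimal prompting sketch, assuming this checkpoint loads like other BLOOMZ models through the causal-LM API (the Russian prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bs-la/bloomz-7b1-4b-ru"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Переведи на английский: Я люблю читать книги."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```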
bs-la/bloomz-7b1-4b-xp3ru
bs-la
2023-01-09T20:15:31Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "bloom", "feature-extraction", "dataset:bigscience/xP3", "dataset:bs-la/xP3ru", "arxiv:2212.09535", "arxiv:2211.01786", "license:bigscience-bloom-rail-1.0", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-12-04T10:56:31Z
--- datasets: - bigscience/xP3 - bs-la/xP3ru license: bigscience-bloom-rail-1.0 model-index: - name: bloomz-7b1 results: - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 53.97 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.00 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 79.09 --- # Model Summary [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) finetuned on xP3 enhanced with Russian multitask data. Hence the same as [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1), but with additional Russian finetuning data. 4b stands for 4 billion finetuning tokens (same as bloomz-7b1). # Citation ``` @article{yong2022bloom+, title={BLOOM+ 1: Adding Language Support to BLOOM for Zero-Shot Prompting}, author={Yong, Zheng-Xin and Schoelkopf, Hailey and Muennighoff, Niklas and Aji, Alham Fikri and Adelani, David Ifeoluwa and Almubarak, Khalid and Bari, M Saiful and Sutawika, Lintang and Kasai, Jungo and Baruwa, Ahmed and others}, journal={arXiv preprint arXiv:2212.09535}, year={2022} } ``` ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
muhtasham/tiny-mlm-glue-wnli-target-glue-mrpc
muhtasham
2023-01-09T20:06:51Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T20:01:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-glue-wnli-target-glue-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1768 - Accuracy: 0.7230 - F1: 0.8094 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5908 | 4.35 | 500 | 0.5715 | 0.7083 | 0.8059 | | 0.4649 | 8.7 | 1000 | 0.5978 | 0.7206 | 0.8106 | | 0.3312 | 13.04 | 1500 | 0.6800 | 0.7255 | 0.8108 | | 0.2207 | 17.39 | 2000 | 0.8000 | 0.7157 | 0.8014 | | 0.1398 | 21.74 | 2500 | 0.9734 | 0.7255 | 0.8069 | | 0.0984 | 26.09 | 3000 | 1.1768 | 0.7230 | 0.8094 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-wnli-target-glue-mnli
muhtasham
2023-01-09T19:59:45Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:50:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-wnli-target-glue-mnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-wnli-target-glue-mnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8534 - Accuracy: 0.6159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0812 | 0.04 | 500 | 1.0475 | 0.4698 | | 1.0185 | 0.08 | 1000 | 0.9640 | 0.5484 | | 0.9627 | 0.12 | 1500 | 0.9279 | 0.5657 | | 0.9401 | 0.16 | 2000 | 0.9181 | 0.5779 | | 0.9307 | 0.2 | 2500 | 0.8954 | 0.5926 | | 0.9249 | 0.24 | 3000 | 0.8846 | 0.5998 | | 0.9083 | 0.29 | 3500 | 0.8752 | 0.6028 | | 0.9022 | 0.33 | 4000 | 0.8636 | 0.6108 | | 0.8841 | 0.37 | 4500 | 0.8628 | 0.6095 | | 0.8857 | 0.41 | 5000 | 0.8534 | 0.6159 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Tanvi2992/ddpm-butterflies-256
Tanvi2992
2023-01-09T19:57:12Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2023-01-09T18:04:02Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: /content/AS/ metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-256 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `/content/AS/` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Tanvi2992/ddpm-butterflies-256/tensorboard?#scalars)
henryscheible/cola
henryscheible
2023-01-09T19:56:16Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:42:03Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.565965534490769 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7272 - Matthews Correlation: 0.5660 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
henryscheible/stsb
henryscheible
2023-01-09T19:53:14Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:42:07Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8888103154344065 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stsb This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.4914 - Pearson: 0.8930 - Spearmanr: 0.8888 - Combined Score: 0.8909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
ruihui/xlm-roberta-base-finetuned-panx-de
ruihui
2023-01-09T19:48:43Z
120
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-21T03:44:21Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: train args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8766218984748463 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2123 - F1: 0.8766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2657 | 1.0 | 525 | 0.1690 | 0.8133 | | 0.1409 | 2.0 | 1050 | 0.1438 | 0.8450 | | 0.0995 | 3.0 | 1575 | 0.1517 | 0.8473 | | 0.068 | 4.0 | 2100 | 0.1528 | 0.8590 | | 0.0503 | 5.0 | 2625 | 0.1663 | 0.8613 | | 0.0351 | 6.0 | 3150 | 0.1820 | 0.8703 | | 0.0245 | 7.0 | 3675 | 0.1853 | 0.8705 | | 0.0164 | 8.0 | 4200 | 0.1968 | 0.8743 | | 0.0102 | 9.0 | 4725 | 0.2087 | 0.8789 | | 0.0067 | 10.0 | 5250 | 0.2123 | 0.8766 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.11.0
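A minimal inference sketch using the 🤗 Transformers token-classification pipeline (the German example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ruihui/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

for entity in ner("Angela Merkel wohnte in Berlin und arbeitete für die CDU."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```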
henryscheible/rte
henryscheible
2023-01-09T19:46:54Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:42:14Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.6462093862815884 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7912 - Accuracy: 0.6462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
santit96/taxiv3-course
santit96
2023-01-09T19:41:32Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T19:41:28Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxiv3-course results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="santit96/taxiv3-course", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
muhtasham/tiny-mlm-glue-stsb-target-glue-stsb
muhtasham
2023-01-09T19:31:31Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T19:22:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: tiny-mlm-glue-stsb-target-glue-stsb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-stsb-target-glue-stsb This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-stsb](https://huggingface.co/muhtasham/tiny-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9825 - Pearson: 0.8061 - Spearmanr: 0.8043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 3.5377 | 2.78 | 500 | 1.3011 | 0.6728 | 0.6791 | | 1.1922 | 5.56 | 1000 | 1.1395 | 0.7537 | 0.7804 | | 0.8417 | 8.33 | 1500 | 0.9970 | 0.7940 | 0.8066 | | 0.6813 | 11.11 | 2000 | 0.8608 | 0.8097 | 0.8125 | | 0.5633 | 13.89 | 2500 | 0.8698 | 0.8122 | 0.8111 | | 0.4986 | 16.67 | 3000 | 0.9720 | 0.8120 | 0.8145 | | 0.4365 | 19.44 | 3500 | 0.8846 | 0.8128 | 0.8114 | | 0.3969 | 22.22 | 4000 | 0.9115 | 0.8139 | 0.8118 | | 0.3544 | 25.0 | 4500 | 0.9530 | 0.8139 | 0.8116 | | 0.3379 | 27.78 | 5000 | 0.9940 | 0.8096 | 0.8094 | | 0.3146 | 30.56 | 5500 | 0.9590 | 0.8092 | 0.8090 | | 0.2881 | 33.33 | 6000 | 0.9825 | 0.8061 | 0.8043 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
santit96/q-FrozenLake-v1-4x4-noSlippery
santit96
2023-01-09T19:30:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T19:29:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym model = load_from_hub(repo_id="santit96/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
lmazzon70/videomae-base-ssv2-finetuned-rwf2000-epochs6
lmazzon70
2023-01-09T19:29:50Z
60
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-01-09T14:13:04Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-ssv2-finetuned-rwf2000-epochs6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-ssv2-finetuned-rwf2000-epochs6 This model is a fine-tuned version of [MCG-NJU/videomae-base-ssv2](https://huggingface.co/MCG-NJU/videomae-base-ssv2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7920 - Accuracy: 0.4357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 4800 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.841 | 0.17 | 800 | 0.7114 | 0.755 | | 0.8781 | 1.17 | 1600 | 1.6078 | 0.5925 | | 0.1951 | 2.17 | 2400 | 1.9190 | 0.5962 | | 0.2094 | 3.17 | 3200 | 0.9991 | 0.7588 | | 0.3594 | 4.17 | 4000 | 1.0306 | 0.7937 | | 0.0019 | 5.17 | 4800 | 1.0982 | 0.7775 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu117 - Datasets 2.8.0 - Tokenizers 0.13.2
Qilex/q-Taxi-v3
Qilex
2023-01-09T19:26:04Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T19:21:02Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Deep RL course materials;
# it downloads and unpickles the saved model dictionary (Q-table, env_id, ...).
model = load_from_hub(repo_id="Qilex/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Xhaheen/srkay-man_6-1-2022
Xhaheen
2023-01-09T19:20:08Z
32
90
diffusers
[ "diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "wildcard", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-06T05:39:26Z
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: a photorealistic image of srkay
---

# DreamBooth model for the srkay concept trained by Xhaheen on the Xhaheen/dreambooth-hackathon-images-srkman-2 dataset.

This is a Stable Diffusion model fine-tuned on Shah Rukh Khan images with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of srkay man**

This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!

## Dataset used

![srkmansd (17).png](https://s3.amazonaws.com/moonup/production/uploads/1673107436292-621c88aca7d6c7e0563256ae.png)
![srkmansd (18).png](https://s3.amazonaws.com/moonup/production/uploads/1673107436124-621c88aca7d6c7e0563256ae.png)
![srkmansd (16).png](https://s3.amazonaws.com/moonup/production/uploads/1673107436048-621c88aca7d6c7e0563256ae.png)

## Description

This is a Stable Diffusion model fine-tuned on `man` images for the wildcard theme.

## Usage

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('Xhaheen/srkay-man_6-1-2022')
# Generate with the instance prompt the model was trained on.
image = pipeline("a photo of srkay man").images[0]
image
```

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1FmTaUN38enNdCgi4HxG0LMZ4HobM0Iq3?usp=sharing)
Qilex/q-FrozenLake-v1-4x4-noSlippery
Qilex
2023-01-09T19:17:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T19:17:53Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Deep RL course materials;
# it downloads and unpickles the saved model dictionary (Q-table, env_id, ...).
model = load_from_hub(repo_id="Qilex/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
wjlgatech/bert-fine-tuned-cola
wjlgatech
2023-01-09T19:12:34Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-07T19:16:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-fine-tuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6256 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6251 | 1.0 | 1069 | 0.6256 | 0.0 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
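The CoLA card above reports a Matthews correlation of 0.0, so the checkpoint is unlikely to give meaningful acceptability judgments; the sketch below only shows how such a fine-tuned checkpoint can be loaded for inspection, assuming the standard `transformers` text-classification pipeline applies.

```python
# Minimal sketch using the standard text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="wjlgatech/bert-fine-tuned-cola")

# CoLA is a binary acceptability task; the label names are whatever the checkpoint's
# config defines (often LABEL_0 / LABEL_1 when id2label was not customized).
print(classifier("The book was written by the author."))
print(classifier("Book the was author written by the."))
```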
threite/Reinforce-Pixelcopter-PLE-v0
threite
2023-01-09T19:08:41Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-05T12:53:34Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 24.20 +/- 17.47 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction