Dataset columns (name, dtype, and value range across rows):

| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-07 06:34:03 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (544 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-07 06:33:46 |
| card | string (length) | 11 | 1.01M |
fmcurti/A2C-LunarLander-v2
fmcurti
2022-05-05T17:34:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T17:09:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 17.50 +/- 120.65 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **A2C** Agent playing **LunarLander-v2** This is a trained model of an **A2C** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
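The usage section above is left as a TODO. A minimal sketch that loads and evaluates the agent via the `huggingface_sb3` helper; the checkpoint `filename` is an assumption, so check the repo's file list for the actual name:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# "A2C-LunarLander-v2.zip" is a guessed filename -- verify it against the repo's files.
checkpoint = load_from_hub(repo_id="fmcurti/A2C-LunarLander-v2", filename="A2C-LunarLander-v2.zip")
model = A2C.load(checkpoint)

# Roll out a few evaluation episodes and report the mean reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```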
pjarbas312/ppo-LunarLander-v2
pjarbas312
2022-05-05T17:01:56Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T17:01:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 260.06 +/- 30.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
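The usage section is a TODO; a minimal sketch, assuming the checkpoint filename, that loads and evaluates the agent:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is a guess -- check the repo's file list for the real checkpoint name.
checkpoint = load_from_hub(repo_id="pjarbas312/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Average the episode reward over a few evaluation rollouts.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```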
DarthGrogu/TEST2ppo-LunarLander-v2
DarthGrogu
2022-05-05T17:01:09Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T17:00:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 215.27 +/- 12.72 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
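The usage section is a TODO; a hedged sketch (the checkpoint filename is assumed) that loads the agent and plays one episode:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Guessed filename -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="DarthGrogu/TEST2ppo-LunarLander-v2", filename="TEST2ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Step the trained policy through one episode (gym<0.26 API, matching these 2022-era cards).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```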
theojolliffe/bart-large-cnn-finetuned-roundup-3-2
theojolliffe
2022-05-05T16:52:43Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-05T16:04:49Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-3-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-3-2 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2234 - Rouge1: 50.9324 - Rouge2: 30.5257 - Rougel: 32.2166 - Rougelsum: 47.9849 - Gen Len: 141.6562 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 258 | 1.2775 | 50.0638 | 30.3036 | 32.9555 | 47.3277 | 142.0 | | 1.1818 | 2.0 | 516 | 1.2234 | 50.9324 | 30.5257 | 32.2166 | 47.9849 | 141.6562 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
hugoguh/ppo-LunarLander-v2
hugoguh
2022-05-05T16:30:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T16:29:58Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 248.25 +/- 18.55 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
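The usage section is a TODO; a minimal evaluation sketch, with the checkpoint filename assumed rather than taken from the card:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Guessed filename -- confirm against the repo's file list.
checkpoint = load_from_hub(repo_id="hugoguh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```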
AdmiralTaco/TEST2ppo-LunarLander-v2
AdmiralTaco
2022-05-05T16:16:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T16:15:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 277.71 +/- 13.90 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
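The usage section is a TODO; a hedged rollout sketch under an assumed checkpoint filename:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Guessed filename -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="AdmiralTaco/TEST2ppo-LunarLander-v2", filename="TEST2ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Play one episode with the trained policy (gym<0.26 step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```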
theojolliffe/bart-large-cnn-finetuned-roundup-3-1
theojolliffe
2022-05-05T16:14:12Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-05T15:09:42Z
--- license: mit tags: - generated_from_trainer model-index: - name: bart-large-cnn-finetuned-roundup-3-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-3-1 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 258 | 1.3238 | 50.228 | 29.5898 | 30.1054 | 47.1265 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-03
Khalsuu
2022-05-05T15:44:36Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:filipino_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-05T08:43:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - filipino_voice model-index: - name: english-filipino-wav2vec2-l-xls-r-test-03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # english-filipino-wav2vec2-l-xls-r-test-03 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Wer: 0.3676 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.3398 | 2.09 | 400 | 0.5733 | 0.6166 | | 0.5087 | 4.19 | 800 | 0.5210 | 0.4775 | | 0.344 | 6.28 | 1200 | 0.5284 | 0.5008 | | 0.2745 | 8.38 | 1600 | 0.5195 | 0.4457 | | 0.2153 | 10.47 | 2000 | 0.5820 | 0.4668 | | 0.1797 | 12.57 | 2400 | 0.4915 | 0.4432 | | 0.1513 | 14.66 | 2800 | 0.6316 | 0.4513 | | 0.1355 | 16.75 | 3200 | 0.5328 | 0.4070 | | 0.1204 | 18.85 | 3600 | 0.5800 | 0.4405 | | 0.1062 | 20.94 | 4000 | 0.6887 | 0.4532 | | 0.0931 | 23.04 | 4400 | 0.6184 | 0.4152 | | 0.0821 | 25.13 | 4800 | 0.7413 | 0.4461 | | 0.0733 | 27.23 | 5200 | 0.7160 | 0.4549 | | 0.071 | 29.32 | 5600 | 0.7001 | 0.4048 | | 0.0577 | 31.41 | 6000 | 0.7839 | 0.4309 | | 0.051 | 33.51 | 6400 | 0.7764 | 0.4128 | | 0.046 | 35.6 | 6800 | 0.6753 | 0.3875 | | 0.0384 | 37.7 | 7200 | 0.7106 | 0.3856 | | 0.0359 | 39.79 | 7600 | 0.6932 | 0.3676 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
Ruth/gbert-large-germaner
Ruth
2022-05-05T15:39:45Z
5
1
transformers
[ "transformers", "tf", "tensorboard", "bert", "token-classification", "de", "dataset:germaner", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-05T14:17:46Z
--- language: - de license: mit datasets: - germaner metrics: - precision - recall - f1 - accuracy model-index: - name: gbert-large-germaner results: - task: name: Token Classification type: token-classification dataset: name: germaner type: germaner args: default metrics: - name: precision type: precision value: 0.8693333333333333 - name: recall type: recall value: 0.885640362225097 - name: f1 type: f1 value: 0.8774110861903236 - name: accuracy type: accuracy value: 0.9784210744831022 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gbert-large-germaner This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on the germaner dataset. It achieves the following results on the evaluation set: - precision: 0.8693 - recall: 0.8856 - f1: 0.8774 - accuracy: 0.9784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - num_train_epochs: 5 - train_batch_size: 8 - eval_batch_size: 8 - learning_rate: 2e-05 - weight_decay_rate: 0.01 - num_warmup_steps: 0 - fp16: True ### Framework versions - Transformers 4.18.0 - Datasets 1.18.0 - Tokenizers 0.12.1
jcranney/ppo2-LunarLander-v2
jcranney
2022-05-05T15:12:39Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T15:03:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 269.91 +/- 16.76 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
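The usage section is a TODO; a minimal sketch, assuming the checkpoint filename, that loads and evaluates the agent:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Guessed filename -- check the repo's file list.
checkpoint = load_from_hub(repo_id="jcranney/ppo2-LunarLander-v2", filename="ppo2-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```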
vuiseng9/roberta-l-squadv1.1
vuiseng9
2022-05-05T15:09:27Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-05-05T14:51:42Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: run05-roberta-large-squadv1.1-sl384-ds128-e2-tbs16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # run05-roberta-large-squadv1.1-sl384-ds128-e2-tbs16 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1 # Train ```bash python run_qa.py \ --model_name_or_path roberta-large \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 500 \ --learning_rate 3e-5 \ --fp16 \ --num_train_epochs 2 \ --per_device_eval_batch_size 64 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 1000 \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR ``` # Eval ```bash export CUDA_VISIBLE_DEVICES=0 MODEL=vuiseng9/roberta-l-squadv1.1 OUTDIR=eval-$(basename $MODEL) WORKDIR=transformers/examples/pytorch/question-answering cd $WORKDIR nohup python run_qa.py \ --model_name_or_path $MODEL \ --dataset_name squad \ --do_eval \ --per_device_eval_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ``` ```bash eval_exact_match = 88.4674 eval_f1 = 94.3001 eval_samples = 10790 ```
antgoldbloom/distilbert-rater
antgoldbloom
2022-05-05T14:45:54Z
7
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-05T14:22:55Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-rater results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rater This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
DeepRoller/rl-model
DeepRoller
2022-05-05T14:38:57Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T13:42:02Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 248.52 +/- 19.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
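The usage section is a TODO; a hedged sketch that plays one episode (the repo name gives no filename hint, so the one below is purely an assumption):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "rl-model.zip" is a placeholder guess -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="DeepRoller/rl-model", filename="rl-model.zip")
model = PPO.load(checkpoint)

# One greedy episode with the trained policy (gym<0.26 step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```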
benjamin/gpt2-wechsel-uyghur
benjamin
2022-05-05T14:24:36Z
5
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "ug", "arxiv:2112.06598", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-05T13:27:26Z
--- language: ug license: mit --- # gpt2-wechsel-uyghur Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. See the code here: https://github.com/CPJKU/wechsel And the paper here: https://arxiv.org/abs/2112.06598 ## Performance | Model | PPL | |---|---| | `gpt2-wechsel-sundanese` | **111.72** | | `gpt2` (retrained from scratch) | 149.46 | | Model | PPL | |---|---| | `gpt2-wechsel-scottish-gaelic` | **16.43** | | `gpt2` (retrained from scratch) | 19.53 | | Model | PPL | |---|---| | `gpt2-wechsel-uyghur` | **34.33** | | `gpt2` (retrained from scratch) | 42.82 | | Model | PPL | |---|---| | `gpt2-wechsel-malagasy` | **14.01** | | `gpt2` (retrained from scratch) | 15.93 | See our paper for details. ## Citation Please cite WECHSEL as ``` @misc{minixhofer2021wechsel, title={WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models}, author={Benjamin Minixhofer and Fabian Paischer and Navid Rekabsaz}, year={2021}, eprint={2112.06598}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
utsavnandi/LunarLander-v2_ppo-mlp-0505_02
utsavnandi
2022-05-05T13:39:28Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T13:38:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO-MLP results: - metrics: - type: mean_reward value: 212.27 +/- 22.31 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO-MLP** Agent playing **LunarLander-v2** This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
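The usage section is a TODO; a minimal evaluation sketch for this PPO agent with an MLP policy (checkpoint filename assumed from the repo name):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Guessed filename -- confirm against the repo's file list.
checkpoint = load_from_hub(repo_id="utsavnandi/LunarLander-v2_ppo-mlp-0505_02",
                           filename="LunarLander-v2_ppo-mlp-0505_02.zip")
model = PPO.load(checkpoint)  # "PPO-MLP" is PPO with an MlpPolicy, so the PPO class loads it

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```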
moussaKam/AraBART
moussaKam
2022-05-05T13:17:29Z
665
14
transformers
[ "transformers", "pytorch", "mbart", "feature-extraction", "summarization", "bart", "fill-mask", "ar", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T10:05:16Z
--- tags: - summarization - bart language: - ar widget: - text: بيروΨͺ Ω‡ΩŠ ΨΉΨ§Ψ΅Ω…Ψ© <mask>. license: apache-2.0 pipeline_tag: "fill-mask" --- AraBART is the first Arabic model in which the encoder and the decoder are pretrained end-to-end, based on BART. AraBART follows the architecture of BART-Base, which has 6 encoder and 6 decoder layers and 768 hidden dimensions. In total AraBART has 139M parameters. AraBART achieves the best performance on multiple abstractive summarization datasets, outperforming strong baselines including pretrained Arabic BERT-based models and the multilingual mBART and mT5 models.
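The card sets `pipeline_tag: "fill-mask"` and ships a widget prompt; a short sketch, reusing the card's own widget text, of querying the model through the `fill-mask` pipeline:

```python
from transformers import pipeline

# AraBART's mask token is <mask>, as in the card's widget example.
unmasker = pipeline("fill-mask", model="moussaKam/AraBART")
# The widget sentence: "Beirut is the capital of <mask>."
print(unmasker("بيروΨͺ Ω‡ΩŠ ΨΉΨ§Ψ΅Ω…Ψ© <mask>."))
```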
obrizum/all-mpnet-base-v2
obrizum
2022-05-05T12:38:54Z
12
1
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2022-05-05T11:49:12Z
--- pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 --- # all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('obrizum/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('obrizum/all-mpnet-base-v2') model = AutoModel.from_pretrained('obrizum/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354).
We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as support from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs. #### Hyperparameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) |
[paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 | | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
arsenplus/TEST2ppo-LunarLander-v2
arsenplus
2022-05-05T12:20:53Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T11:21:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 221.25 +/- 20.09 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
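The usage section is a TODO; a hedged sketch (checkpoint filename assumed) that loads the agent and plays one episode:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Guessed filename -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="arsenplus/TEST2ppo-LunarLander-v2", filename="TEST2ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# One greedy episode with the trained policy (gym<0.26 step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```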
dk-crazydiv/LunarLander-v2
dk-crazydiv
2022-05-05T11:48:19Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T05:13:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 296.60 +/- 16.78 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
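The usage section is a TODO; a minimal evaluation sketch (the repo name gives little to go on, so the filename below is purely a guess):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder filename -- check the repo's file list for the real one.
checkpoint = load_from_hub(repo_id="dk-crazydiv/LunarLander-v2", filename="LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```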
Piumi/ScholarBERT
Piumi
2022-05-05T11:41:19Z
0
0
null
[ "text-classification", "region:us" ]
text-classification
2022-05-05T10:41:19Z
--- tags: - text-classification --- # Scientific article multi-label classification SciBERT model
katta/PPO-LunarLander-v2
katta
2022-05-05T11:33:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T11:33:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO-mlp results: - metrics: - type: mean_reward value: 272.32 +/- 16.75 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO-mlp** Agent playing **LunarLander-v2** This is a trained model of a **PPO-mlp** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
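The usage section is a TODO; a hedged rollout sketch under an assumed checkpoint filename ("PPO-mlp" is PPO with an MlpPolicy, so the PPO class loads it):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Guessed filename -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="katta/PPO-LunarLander-v2", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Play one episode with the trained policy (gym<0.26 step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```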
ceyda/RLcourse-ppo-LunarLanderv2
ceyda
2022-05-05T11:31:01Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T10:54:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 271.05 +/- 22.76 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
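The usage section is a TODO; a minimal evaluation sketch with an assumed checkpoint filename:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Guessed filename -- confirm against the repo's file list.
checkpoint = load_from_hub(repo_id="ceyda/RLcourse-ppo-LunarLanderv2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```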
nondevs/100k-ppo-LunarLander-v2
nondevs
2022-05-05T11:27:11Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T10:26:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 295.52 +/- 15.60 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
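The usage section is a TODO; a hedged sketch (checkpoint filename assumed) that loads the agent and plays one episode:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Guessed filename -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="nondevs/100k-ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# One greedy episode with the trained policy (gym<0.26 step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```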
Gootter/autotrain-Bart_683-825526269
Gootter
2022-05-05T10:03:01Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain", "unk", "dataset:Gootter/autotrain-data-Bart_683", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-05T09:46:53Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain πŸ€—" datasets: - Gootter/autotrain-data-Bart_683 co2_eq_emissions: 28.12268287254098 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 825526269 - CO2 Emissions (in grams): 28.12268287254098 ## Validation Metrics - Loss: 2.836289644241333 - Rouge1: 31.9867 - Rouge2: 10.3239 - RougeL: 21.0603 - RougeLsum: 30.0862 - Gen Len: 142.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Gootter/autotrain-Bart_683-825526269 ```
adityay1221/cat.5.32
adityay1221
2022-05-05T09:58:36Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-05T09:57:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: cat.5.32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cat.5.32 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0293 - Bleu: 25.3811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 121 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
DioLiu/distilroberta-base-wiki_shake_mask
DioLiu
2022-05-05T09:26:08Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-05T08:21:12Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-wiki_shake_mask results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-wiki_shake_mask This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6528 | 1.0 | 3015 | 2.5390 | | 2.5536 | 2.0 | 6030 | 2.4558 | | 2.5396 | 3.0 | 9045 | 2.4464 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Jezzarax/TEST2ppo-LunarLander-v2
Jezzarax
2022-05-05T08:49:26Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T07:44:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 294.19 +/- 21.02 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
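The usage section is a TODO; a minimal sketch, assuming the checkpoint filename, that loads and evaluates the agent:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Guessed filename -- check the repo's file list.
checkpoint = load_from_hub(repo_id="Jezzarax/TEST2ppo-LunarLander-v2", filename="TEST2ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```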
hfl/chinese-pert-large-mrc
hfl
2022-05-05T08:43:53Z
23
10
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-05T05:52:57Z
--- language: - zh license: "apache-2.0" --- ## A Chinese MRC model built on Chinese PERT-large **Please use `BertForQuestionAnswering` to load this model!** This is a Chinese machine reading comprehension (MRC) model built on PERT-large and fine-tuned on a mixture of Chinese MRC datasets. PERT is a pre-trained model based on the permuted language model (PerLM), which learns text semantic information in a self-supervised manner without introducing the mask token [MASK]. It yields competitive results in tasks such as reading comprehension and sequence labeling. Results on Chinese MRC datasets (EM/F1) (we report the checkpoint with the best AVG score): | | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG | | :-------: | :-----------: | :-------: | :------------------------: | :-------: | | PERT-large | 73.5/90.8 | 91.2/95.7 | 63.0/79.3 | 75.9/88.6 | Please visit our GitHub repo for more information: https://github.com/ymcui/PERT You may also be interested in: Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM Chinese MacBERT: https://github.com/ymcui/MacBERT Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA Chinese XLNet: https://github.com/ymcui/Chinese-XLNet Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology
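Since the card insists on `BertForQuestionAnswering`, a minimal extractive-QA sketch (the question/context pair is an illustrative assumption, not from the card):

```python
from transformers import BertTokenizer, BertForQuestionAnswering, pipeline

model_name = "hfl/chinese-pert-large-mrc"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)  # as the card instructs

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
# Illustrative pair: "Where is the capital of China?" / "Beijing is the capital of China."
print(qa(question="δΈ­ε›½ηš„ι¦–ιƒ½ζ˜―ε“ͺι‡Œ?", context="εŒ—δΊ¬ζ˜―δΈ­ε›½ηš„ι¦–ιƒ½γ€‚"))
```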
mcditoos/PPO-LunarLander-v2
mcditoos
2022-05-05T07:12:33Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T07:11:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 233.04 +/- 17.51 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
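The usage section is a TODO; a hedged rollout sketch under an assumed checkpoint filename:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Guessed filename -- verify against the repo's files.
checkpoint = load_from_hub(repo_id="mcditoos/PPO-LunarLander-v2", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Play one episode with the trained policy (gym<0.26 step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```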
DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki
DioLiu
2022-05-05T06:39:28Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-05T05:17:48Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst2-shake-wiki results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2-shake-wiki This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0096 - Accuracy: 0.9994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.001 | 1.0 | 5029 | 0.0120 | 0.9988 | | 0.0017 | 2.0 | 10058 | 0.0028 | 0.9996 | | 0.0 | 3.0 | 15087 | 0.0094 | 0.9992 | | 0.0 | 4.0 | 20116 | 0.0091 | 0.9994 | | 0.0 | 5.0 | 25145 | 0.0096 | 0.9994 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
YeRyeongLee/bert-base-uncased-finetuned-0505-2
YeRyeongLee
2022-05-05T06:29:23Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-05T05:39:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-uncased-finetuned-0505-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-0505-2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4277 - Accuracy: 0.9206 - F1: 0.9205 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 1373 | 0.3634 | 0.9025 | 0.9012 | | No log | 2.0 | 2746 | 0.3648 | 0.9066 | 0.9060 | | No log | 3.0 | 4119 | 0.3978 | 0.9189 | 0.9183 | | No log | 4.0 | 5492 | 0.4277 | 0.9206 | 0.9205 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
jo0hnd0e/bert-finetuned-ner
jo0hnd0e
2022-05-05T06:10:51Z
3
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-21T06:03:50Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: jo0hnd0e/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jo0hnd0e/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0276 - Validation Loss: 0.0565 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1742 | 0.0636 | 0 | | 0.0470 | 0.0551 | 1 | | 0.0276 | 0.0565 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
SuperSecureHuman/Lunar-Landing-PPO
SuperSecureHuman
2022-05-05T05:56:49Z
10
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T14:03:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 284.30 +/- 14.06 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 ---
maesneako/gpt2-fr-eos-paco-cheese
maesneako
2022-05-05T04:47:13Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-05T04:29:18Z
--- tags: - generated_from_trainer model-index: - name: gpt2-fr-eos-paco-cheese results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-fr-eos-paco-cheese This model is a fine-tuned version of [dbddv01/gpt2-french-small](https://huggingface.co/dbddv01/gpt2-french-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 65 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
YeRyeongLee/mental-bert-base-uncased-finetuned-0505
YeRyeongLee
2022-05-05T04:19:55Z
42
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-05T03:29:42Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: mental-bert-base-uncased-finetuned-0505 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mental-bert-base-uncased-finetuned-0505 This model is a fine-tuned version of [mental/mental-bert-base-uncased](https://huggingface.co/mental/mental-bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4195 - Accuracy: 0.9181 - F1: 0.9182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 1373 | 0.2846 | 0.9124 | 0.9119 | | No log | 2.0 | 2746 | 0.3468 | 0.9132 | 0.9129 | | No log | 3.0 | 4119 | 0.3847 | 0.9189 | 0.9192 | | No log | 4.0 | 5492 | 0.4195 | 0.9181 | 0.9182 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
schorndorfer/distilroberta-base-finetuned-wikitext2
schorndorfer
2022-05-05T04:09:37Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-05T03:45:14Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0853 | 1.0 | 2406 | 1.9214 | | 1.986 | 2.0 | 4812 | 1.8799 | | 1.9568 | 3.0 | 7218 | 1.8202 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
schorndorfer/distilgpt2-finetuned-wikitext2
schorndorfer
2022-05-05T03:42:12Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-05T03:09:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.76 | 1.0 | 2334 | 3.6658 | | 3.6526 | 2.0 | 4668 | 3.6468 | | 3.6004 | 3.0 | 7002 | 3.6425 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
vickyjm/ppo-LunarLander-v2
vickyjm
2022-05-05T03:08:38Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T01:41:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 214.15 +/- 72.82 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
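The usage section is a TODO; a minimal evaluation sketch with an assumed checkpoint filename:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Guessed filename -- confirm against the repo's file list.
checkpoint = load_from_hub(repo_id="vickyjm/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```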
dbarbedillo/ppo-LunarLander-v2
dbarbedillo
2022-05-05T02:41:24Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T02:40:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 296.33 +/- 19.27 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
heriosousa/ppo-LunarLander-v2
heriosousa
2022-05-05T01:58:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T00:32:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 242.77 +/- 18.06 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
magitz/ppo-LunarLander-v2-HFcourse
magitz
2022-05-05T01:06:58Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T01:05:35Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 188.35 +/- 88.74 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
TweebankNLP/bertweet-tb2_wnut17-ner
TweebankNLP
2022-05-05T00:23:17Z
117
4
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "arxiv:2201.07281", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-04T16:50:37Z
--- license: cc-by-nc-4.0 --- ## Model Specification - This is the **state-of-the-art Twitter NER model (with 74.35% Entity-Level F1)** on Tweebank V2's NER benchmark (also called `Tweebank-NER`), trained on the corpus combining both Tweebank-NER and WNUT 17 training data. - For more details about the `TweebankNLP` project, please refer to [our paper](https://arxiv.org/pdf/2201.07281.pdf) and the [GitHub](https://github.com/social-machines/TweebankNLP) page. - In the paper, it is referred to as `HuggingFace-BERTweet (TB2+W17)`. ## How to use the model - **PRE-PROCESSING**: when you apply the model to tweets, please make sure that the tweets are preprocessed by the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner") model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner") ``` ## References If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf): ```bibtex @article{jiang2022tweetnlp, title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis}, author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb}, journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)}, year={2022} } ```
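The snippet in the card loads the tokenizer and model but stops before inference; a possible continuation is sketched below. The example tweet is made up, and per the card it should ideally be passed through BERTweet's TweetNormalizer first.

```python
from transformers import pipeline

# Reuses `model` and `tokenizer` from the card's snippet above;
# aggregation_strategy="simple" groups word pieces into whole entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Illustrative, unnormalized tweet text.
print(ner("Arsenal face Manchester United at the Emirates on Sunday"))
```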
dalvarez/PPO-LunarLander-v2
dalvarez
2022-05-05T00:12:32Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-05T00:11:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 131.32 +/- 54.42 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
akkasayaz/ppo-LunarLander-v2
akkasayaz
2022-05-04T23:44:34Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T23:43:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 233.78 +/- 19.45 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
YeRyeongLee/bert-base-uncased-finetuned-small-0505
YeRyeongLee
2022-05-04T22:54:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-04T22:25:48Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-uncased-finetuned-small-0505 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-small-0505 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8649 - Accuracy: 0.1818 - F1: 0.1182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 13 | 1.8337 | 0.1818 | 0.0559 | | No log | 2.0 | 26 | 1.8559 | 0.2727 | 0.1414 | | No log | 3.0 | 39 | 1.8488 | 0.1818 | 0.1010 | | No log | 4.0 | 52 | 1.8649 | 0.1818 | 0.1182 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
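A minimal inference sketch for `YeRyeongLee/bert-base-uncased-finetuned-small-0505`, the repo id listed for this row. Since the card does not document the label set, outputs may come back as generic `LABEL_0`-style ids.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="YeRyeongLee/bert-base-uncased-finetuned-small-0505")

# Example input is illustrative; map LABEL_* ids to class names once they are documented.
print(classifier("This is a short test sentence."))
```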
HomayounSadri/bert-base-uncased-finetuned-squad
HomayounSadri
2022-05-04T22:40:18Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-04T18:45:32Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: HomayounSadri/bert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # HomayounSadri/bert-base-uncased-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6196 - Validation Loss: 1.0521 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16596, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.3747 | 1.0166 | 0 | | 0.8290 | 0.9963 | 1 | | 0.6196 | 1.0521 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
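A minimal question-answering sketch for `HomayounSadri/bert-base-uncased-finetuned-squad`. The row's tags suggest TensorFlow weights only, hence `framework="tf"`; drop that argument if PyTorch weights are also published. The question and context are illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="HomayounSadri/bert-base-uncased-finetuned-squad", framework="tf")

result = qa(
    question="What was the final training loss?",
    context="The model reached a training loss of 0.6196 and a validation loss of 1.0521 after three epochs.",
)
print(result)
```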
mmangino/ppo-LunarLander-v2
mmangino
2022-05-04T20:24:48Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T20:24:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 282.72 +/- 23.16 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
utkusaglm/ppo-LunarLander-v1
utkusaglm
2022-05-04T20:23:28Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T20:17:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 295.94 +/- 13.13 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
elotech/ppo-LunarLander-v2
elotech
2022-05-04T19:50:20Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T19:26:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO_v2 results: - metrics: - type: mean_reward value: 254.01 +/- 15.18 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO_v2** Agent playing **LunarLander-v2** This is a trained model of a **PPO_v2** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
AndreyM/rl_course_luner_lander
AndreyM
2022-05-04T19:41:14Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T19:02:18Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 261.24 +/- 15.44 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
theojolliffe/bart-large-cnn-finetuned-roundup-2-4
theojolliffe
2022-05-04T19:31:38Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-04T17:32:49Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-2-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-2-4 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0908 - Rouge1: 51.9961 - Rouge2: 32.3963 - Rougel: 32.1774 - Rougelsum: 50.1033 - Gen Len: 141.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 167 | 1.2152 | 52.234 | 33.1104 | 33.308 | 49.5516 | 142.0 | | No log | 2.0 | 334 | 1.1054 | 52.7096 | 33.4698 | 33.9595 | 49.8736 | 140.3333 | | 1.0437 | 3.0 | 501 | 1.0796 | 51.699 | 32.4255 | 34.0294 | 49.5276 | 141.7143 | | 1.0437 | 4.0 | 668 | 1.0908 | 51.9961 | 32.3963 | 32.1774 | 50.1033 | 141.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
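A minimal summarization sketch for `theojolliffe/bart-large-cnn-finetuned-roundup-2-4`. The generation length is illustrative (the card reports generations averaging ~141 tokens), not a value taken from the training config.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-finetuned-roundup-2-4")

# Replace with a long round-up style article; the model was fine-tuned to
# produce multi-sentence summaries of such texts.
article = (
    "Replace this string with a long round-up style article covering several "
    "related announcements and events."
)
print(summarizer(article, max_length=142, do_sample=False))
```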
nondevs/TEST2ppo-LunarLander-v2
nondevs
2022-05-04T19:31:18Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T18:50:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 270.00 +/- 22.67 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
Sami/PPO-LunarLander-v2
Sami
2022-05-04T19:10:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T17:45:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 292.63 +/- 17.52 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
Jaechang/ppo-LunarLander-v2
Jaechang
2022-05-04T19:08:59Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T17:56:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 210.82 +/- 19.82 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 ---
SaiShashank1303/ch-1-ppo-LunarLander-v2
SaiShashank1303
2022-05-04T19:04:07Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T19:03:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo-LunarLander-v2 results: - metrics: - type: mean_reward value: 203.94 +/- 26.92 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **ppo-LunarLander-v2** Agent playing **LunarLander-v2** This is a trained model of a **ppo-LunarLander-v2** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
CWhy/given-ppo-LunarLander-v2
CWhy
2022-05-04T18:44:21Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T18:43:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 198.92 +/- 36.84 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
arkadip-maitra/ppo-LunarLander-v2
arkadip-maitra
2022-05-04T17:57:24Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T16:34:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 232.73 +/- 68.47 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
riteshhf/TEST1ppo-LunarLander-v2
riteshhf
2022-05-04T17:48:57Z
0
2
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T17:48:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 267.56 +/- 15.74 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
huggingtweets/usmnt-zacksteffen_
huggingtweets
2022-05-04T17:19:08Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-04T17:18:29Z
--- language: en thumbnail: http://www.huggingtweets.com/usmnt-zacksteffen_/1651684743123/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & Zack Steffen</div> <div style="text-align: center; font-size: 14px;">@usmnt-zacksteffen_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from USMNT & Zack Steffen. | Data | USMNT | Zack Steffen | | --- | --- | --- | | Tweets downloaded | 3250 | 3120 | | Retweets | 600 | 869 | | Short tweets | 215 | 523 | | Tweets kept | 2435 | 1728 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34uud8si/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usmnt-zacksteffen_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/usmnt-zacksteffen_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jamm/ppo-LunarLander-v2
jamm
2022-05-04T16:42:01Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T16:41:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 281.88 +/- 14.38 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2
MartinoMensio
2022-05-04T16:28:04Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T17:06:08Z
--- language: es license: mit widget: - text: "y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `w-m-vote-nonstrict-epoch-2` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
model_name = 'w-m-vote-nonstrict-epoch-2' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judΓ­os controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9680026173591614}, {'label': 'non-racist', 'score': 0.9936750531196594}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1
MartinoMensio
2022-05-04T16:27:31Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T17:01:40Z
--- language: es license: mit widget: - text: "y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `w-m-vote-nonstrict-epoch-1` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
model_name = 'w-m-vote-nonstrict-epoch-1' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judΓ­os controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.8460916876792908}, {'label': 'non-racist', 'score': 0.9714874029159546}] ``` For more details, see https://github.com/preyero/neatclass22
Guillaume63/ppo-LunarLander-v2
Guillaume63
2022-05-04T16:27:19Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T16:26:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 223.27 +/- 26.13 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
MartinoMensio/racism-models-w-m-vote-strict-epoch-1
MartinoMensio
2022-05-04T16:24:13Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:53:35Z
--- language: es license: mit widget: - text: "y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `w-m-vote-strict-epoch-1` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
model_name = 'w-m-vote-strict-epoch-1' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judΓ­os controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9342454075813293}, {'label': 'non-racist', 'score': 0.7690662741661072}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-regression-w-m-vote-epoch-3
MartinoMensio
2022-05-04T16:21:40Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:21:04Z
--- language: es license: mit widget: - text: "y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `regression-w-m-vote-epoch-3` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
from transformers.pipelines import TextClassificationPipeline class TextRegressionPipeline(TextClassificationPipeline): """ Class based on the TextClassificationPipeline from transformers. The difference is that instead of being based on a classifier, it is based on a regressor. You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline. """ def __init__(self, **kwargs): """ Builds a new Pipeline based on regression. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold = kwargs.pop("regression_threshold", None) super().__init__(**kwargs) def __call__(self, *args, **kwargs): """ You can also specify the regression threshold when you call the pipeline. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold_call = kwargs.pop("regression_threshold", None) result = super().__call__(*args, **kwargs) return result def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False): outputs = model_outputs["logits"][0] outputs = outputs.numpy() scores = outputs score = scores[0] regression_threshold = self.regression_threshold # override the specific threshold if it is specified in the call if self.regression_threshold_call: regression_threshold = self.regression_threshold_call if regression_threshold: return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score} else: return {"score": score} model_name = 'regression-w-m-vote-epoch-3' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer) texts = [ 'y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judΓ­os controlan el mundo' ] # just get the score of regression print(pipe(texts)) # [{'score': 0.7393736}, {'score': 0.44301373}] # or also specify a threshold to cut racist/non-racist print(pipe(texts, regression_threshold=0.9)) # [{'label': 'non-racist', 'score': 0.7393736}, {'label': 'non-racist', 'score': 0.44301373}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-regression-w-m-vote-epoch-2
MartinoMensio
2022-05-04T16:20:44Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:18:45Z
--- language: es license: mit widget: - text: "y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `regression-w-m-vote-epoch-2` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
from transformers.pipelines import TextClassificationPipeline class TextRegressionPipeline(TextClassificationPipeline): """ Class based on the TextClassificationPipeline from transformers. The difference is that instead of being based on a classifier, it is based on a regressor. You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline. """ def __init__(self, **kwargs): """ Builds a new Pipeline based on regression. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold = kwargs.pop("regression_threshold", None) super().__init__(**kwargs) def __call__(self, *args, **kwargs): """ You can also specify the regression threshold when you call the pipeline. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold_call = kwargs.pop("regression_threshold", None) result = super().__call__(*args, **kwargs) return result def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False): outputs = model_outputs["logits"][0] outputs = outputs.numpy() scores = outputs score = scores[0] regression_threshold = self.regression_threshold # override the specific threshold if it is specified in the call if self.regression_threshold_call: regression_threshold = self.regression_threshold_call if regression_threshold: return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score} else: return {"score": score} model_name = 'regression-w-m-vote-epoch-2' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer) texts = [ 'y porquΓ© es lo que hay que hacer con los menas y con los adultos tambiΓ©n!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judΓ­os controlan el mundo' ] # just get the score of regression print(pipe(texts)) # [{'score': 0.8367272}, {'score': 0.4402479}] # or also specify a threshold to cut racist/non-racist print(pipe(texts, regression_threshold=0.9)) # [{'label': 'non-racist', 'score': 0.8367272}, {'label': 'non-racist', 'score': 0.4402479}] ``` For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-regression-w-m-vote-epoch-1
MartinoMensio
2022-05-04T16:18:39Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:15:44Z
---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---

### Description

This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).

We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).

We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:

| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |

This model is `regression-w-m-vote-epoch-1`.

### Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from transformers.pipelines import TextClassificationPipeline

class TextRegressionPipeline(TextClassificationPipeline):
    """
    Class based on the TextClassificationPipeline from transformers.
    The difference is that instead of being based on a classifier, it is based on a regressor.
    You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline.
    """

    def __init__(self, **kwargs):
        """
        Builds a new Pipeline based on regression.

        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold = kwargs.pop("regression_threshold", None)
        super().__init__(**kwargs)

    def __call__(self, *args, **kwargs):
        """
        You can also specify the regression threshold when you call the pipeline.

        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold_call = kwargs.pop("regression_threshold", None)
        result = super().__call__(*args, **kwargs)
        return result

    def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False):
        outputs = model_outputs["logits"][0]
        outputs = outputs.numpy()
        scores = outputs
        score = scores[0]
        regression_threshold = self.regression_threshold
        # override the specific threshold if it is specified in the call
        if self.regression_threshold_call:
            regression_threshold = self.regression_threshold_call
        if regression_threshold:
            return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score}
        else:
            return {"score": score}

model_name = 'regression-w-m-vote-epoch-1'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)

pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer)

texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]

# just get the score of regression
print(pipe(texts))
# [{'score': 0.8378907}, {'score': 0.33399782}]

# or also specify a threshold to cut racist/non-racist
print(pipe(texts, regression_threshold=0.9))
# [{'label': 'non-racist', 'score': 0.8378907}, {'label': 'non-racist', 'score': 0.33399782}]
```

For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-m-vote-nonstrict-epoch-2
MartinoMensio
2022-05-04T16:12:34Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:46:17Z
---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---

### Description

This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).

We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).

We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:

| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |

This model is `m-vote-nonstrict-epoch-2`.

### Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'm-vote-nonstrict-epoch-2'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)

texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.8650100827217102}, {'label': 'non-racist', 'score': 0.9674995541572571}]
```

For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-m-vote-strict-epoch-3
MartinoMensio
2022-05-04T16:09:42Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:35:22Z
---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---

### Description

This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).

We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).

We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:

| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |

This model is `m-vote-strict-epoch-3`.

### Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'm-vote-strict-epoch-3'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)

texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9929012656211853}, {'label': 'non-racist', 'score': 0.5616322159767151}]
```

For more details, see https://github.com/preyero/neatclass22
seriy21/ppo-LunarLander-v2
seriy21
2022-05-04T16:09:25Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T16:08:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 286.36 +/- 12.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
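In place of the TODO above, here is a minimal loading-and-evaluation sketch (not the author's original script). The in-repo filename `ppo-LunarLander-v2.zip` and the use of the `huggingface_sb3` helper are assumptions; adapt them to whatever the repository actually contains.

```python
# Minimal sketch, assuming the checkpoint was pushed with huggingface_sb3
# under the (assumed) filename "ppo-LunarLander-v2.zip".
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(repo_id="seriy21/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```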
MartinoMensio/racism-models-m-vote-strict-epoch-2
MartinoMensio
2022-05-04T16:08:39Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:32:15Z
---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---

### Description

This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).

We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).

We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:

| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |

This model is `m-vote-strict-epoch-2`.

### Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'm-vote-strict-epoch-2'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)

texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.923829972743988}, {'label': 'non-racist', 'score': 0.8673009872436523}]
```

For more details, see https://github.com/preyero/neatclass22
MartinoMensio/racism-models-raw-label-epoch-4
MartinoMensio
2022-05-04T16:06:20Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-15T16:12:31Z
---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---

### Description

This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).

We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).

We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:

| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |

This model is `raw-label-epoch-4`.

### Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'raw-label-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)

texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.921501636505127}, {'label': 'non-racist', 'score': 0.9459075331687927}]
```

For more details, see https://github.com/preyero/neatclass22
NorbertRop/PPO-MlpPolicy-LunarLander-v2
NorbertRop
2022-05-04T15:13:59Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T15:11:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 234.34 +/- 20.06 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
LidarRL/TEST2ppo-LunarLander-v2
LidarRL
2022-05-04T15:10:24Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-05-04T14:20:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 204.65 +/- 31.76 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
dbmdz/flair-hipe-2022-ajmc-all
dbmdz
2022-05-04T13:43:34Z
10
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "multilingual", "license:mit", "region:us" ]
token-classification
2022-04-29T07:26:42Z
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: multilingual
widget:
- text: "In editing the Fragments , I have availed myself of Mr . R . Ellis ' acute remarks on them in the Cambridge Journal of Philology , Vol . IV , and that I am largely indebted , as every editor must now be , to the edition of the Tragic Fragments by A . Nauck , Leipzig , 1856 ."
- text: "459 . Skyros klang dem Athener etwa wie Pholegandros und Sikinos bei Solon Eleg . 1 , 4 , dem Römer Ulubrae , Butunti ."
- text: "Celles d ' Ajax et des siens occupaient l ' extrême aile gauche , vers le promontoire Rhétée , et confinaient tout à la fois au retranchement et à la mer ( // . XIT1 , 681 ; Heynce , excursns cité ) ,"
license: mit
---
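The card itself contains only metadata and widget examples. The snippet below is a hedged usage sketch, not an official example: it assumes the repository holds a standard Flair sequence tagger, and the exact label types emitted by this HIPE-2022 model are not documented here, so inspect the tagged output rather than assuming a specific tag set.

```python
# Hedged usage sketch for this Flair sequence tagger (not an official example).
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger directly from the Hugging Face Hub
tagger = SequenceTagger.load("dbmdz/flair-hipe-2022-ajmc-all")

# One of the widget sentences from above
sentence = Sentence("459 . Skyros klang dem Athener etwa wie Pholegandros und Sikinos bei Solon Eleg . 1 , 4 , dem Römer Ulubrae , Butunti .")
tagger.predict(sentence)

# Print the sentence together with its predicted tags
print(sentence.to_tagged_string())
```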
uhlenbeckmew/distilroberta-base-swift_shake
uhlenbeckmew
2022-05-04T13:25:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-04T13:07:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-swift_shake results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-swift_shake This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 334 | 2.5817 | | 2.7363 | 2.0 | 668 | 2.4499 | | 2.4584 | 3.0 | 1002 | 2.5309 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
jonfrank/xlm-roberta-base-finetuned-panx-de
jonfrank
2022-05-04T10:13:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-04T09:39:55Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8654425558524246 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1334 - F1: 0.8654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2541 | 1.0 | 525 | 0.1596 | 0.8242 | | 0.1284 | 2.0 | 1050 | 0.1360 | 0.8499 | | 0.0827 | 3.0 | 1575 | 0.1334 | 0.8654 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
Nijana/gpt-neo-1.3B-climate_change_tweets
Nijana
2022-05-04T10:12:52Z
0
0
null
[ "region:us" ]
null
2022-05-02T11:35:45Z
---
license: cc-by-3.0
---

# A fine-tuned GPT-Neo Model for Tweet Generation

This model is a fine-tuned version of the 1.3B-parameter GPT-Neo model developed by EleutherAI. As the default GPT-Neo model did not receive any social media data during its pre-training, we fine-tuned it with tweets collected from Twitter from October to November 2021 related to climate change hashtags.

The model received data in the format `<username> - <tweet>`. We used an 80/20 train/test split, and to differentiate distinct tweets, we added a start-of-tweet and an end-of-tweet token to the training dataset.

To guide you in using this model, please consult the `gpt_neo_1.3B_twitter.ipynb` Jupyter Notebook file from this repository.
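The notebook above is the authoritative reference. Purely as an illustration, and assuming the fine-tuned weights are published in this repository in the standard `transformers` format, generation could look roughly like the sketch below; the username prefix is a made-up example of the `<username> - <tweet>` format and the sampling settings are arbitrary.

```python
# Illustrative sketch only; see the bundled notebook for the authors' intended usage.
from transformers import pipeline

generator = pipeline("text-generation", model="Nijana/gpt-neo-1.3B-climate_change_tweets")

# Hypothetical username prefix following the "<username> - <tweet>" training format
prompt = "climate_watcher - "
for output in generator(prompt, max_length=60, do_sample=True, top_p=0.95, num_return_sequences=3):
    print(output["generated_text"])
```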
waboucay/camembert-base-finetuned-xnli_fr-finetuned-nli-repnum_wl
waboucay
2022-05-04T09:31:42Z
4
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "nli", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-04T09:28:44Z
--- language: - fr tags: - nli metrics: - f1 --- ## Eval results We obtain the following results on ```validation``` and ```test``` sets: | Set | F1<sub>micro</sub> | F1<sub>macro</sub> | |------------|--------------------|--------------------| | validation | 73.3 | 73.3 | | test | 69.4 | 69.4 |
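The card only reports scores. As a hedged illustration (not from the authors), premise/hypothesis pairs can be scored with the standard text-classification pipeline; the example sentences are invented, and the label names come from the model's config rather than from this card.

```python
# Hedged NLI usage sketch; example sentences are invented for illustration.
from transformers import pipeline

nli = pipeline(
    "text-classification",
    model="waboucay/camembert-base-finetuned-xnli_fr-finetuned-nli-repnum_wl",
)

premise = "Le projet de loi a été adopté à une large majorité."
hypothesis = "Le texte a été rejeté par les députés."

# The pipeline accepts a dict with "text" and "text_pair" for sentence-pair tasks
print(nli({"text": premise, "text_pair": hypothesis}))
```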
nbhimte/tiny-bert-mnli-distilled
nbhimte
2022-05-04T07:14:17Z
26
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-17T03:40:10Z
--- tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny-bert-mnli-distilled results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.5818644931227712 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bert-mnli-distilled It achieves the following results on the evaluation set: - Loss: 1.5018 - Accuracy: 0.5819 - F1 score: 0.5782 - Precision score: 0.6036 - Metric recall: 0.5819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 32 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 score | Precision score | Metric recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:-------------:| | 1.4475 | 1.0 | 614 | 1.4296 | 0.4521 | 0.4070 | 0.5621 | 0.4521 | | 1.3354 | 2.0 | 1228 | 1.4320 | 0.4805 | 0.4579 | 0.5276 | 0.4805 | | 1.2244 | 3.0 | 1842 | 1.4786 | 0.5699 | 0.5602 | 0.5865 | 0.5699 | | 1.1416 | 4.0 | 2456 | 1.5018 | 0.5819 | 0.5782 | 0.6036 | 0.5819 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.11.6
huggingtweets/dril-nycguidovoice-senn_spud
huggingtweets
2022-05-04T01:55:26Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-04T01:44:12Z
--- language: en thumbnail: http://www.huggingtweets.com/dril-nycguidovoice-senn_spud/1651629321136/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1503095773059244036/xof9dI-A_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1387151448203358209/HKNuKY7L_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Nick Mullen & Will Sennett</div> <div style="text-align: center; font-size: 14px;">@dril-nycguidovoice-senn_spud</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Nick Mullen & Will Sennett. | Data | wint | Nick Mullen | Will Sennett | | --- | --- | --- | --- | | Tweets downloaded | 3229 | 1007 | 3231 | | Retweets | 486 | 71 | 314 | | Short tweets | 300 | 41 | 631 | | Tweets kept | 2443 | 895 | 2286 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dcek2rh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-nycguidovoice-senn_spud's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2f1xmo4s) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2f1xmo4s/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-nycguidovoice-senn_spud') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
theojolliffe/bart-large-cnn-finetuned-roundup-64
theojolliffe
2022-05-04T00:41:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T21:34:00Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-64 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-64 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4772 - Rouge1: 46.5444 - Rouge2: 27.4056 - Rougel: 29.6779 - Rougelsum: 44.0905 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 64 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 132 | 1.3213 | 48.3389 | 28.6641 | 31.4086 | 45.6679 | 142.0 | | No log | 2.0 | 264 | 1.2325 | 48.798 | 29.3068 | 31.4329 | 45.7945 | 142.0 | | No log | 3.0 | 396 | 1.2791 | 47.1449 | 27.3965 | 30.56 | 44.4704 | 142.0 | | 0.9574 | 4.0 | 528 | 1.3134 | 46.2319 | 25.6249 | 28.7673 | 43.7555 | 140.3 | | 0.9574 | 5.0 | 660 | 1.3187 | 46.7313 | 25.3467 | 29.3873 | 43.9495 | 142.0 | | 0.9574 | 6.0 | 792 | 1.4271 | 48.1638 | 27.8874 | 30.5334 | 45.9944 | 142.0 | | 0.9574 | 7.0 | 924 | 1.4876 | 46.7481 | 25.7259 | 29.7214 | 43.7042 | 140.5 | | 0.3303 | 8.0 | 1056 | 1.5259 | 46.7075 | 26.0716 | 29.5521 | 43.7312 | 142.0 | | 0.3303 | 9.0 | 1188 | 1.6223 | 48.012 | 27.2795 | 30.4989 | 45.4644 | 142.0 | | 0.3303 | 10.0 | 1320 | 1.6842 | 48.0074 | 26.8831 | 29.3396 | 45.1937 | 142.0 | | 0.3303 | 11.0 | 1452 | 1.7317 | 46.52 | 26.5152 | 29.5124 | 43.8797 | 142.0 | | 0.1478 | 12.0 | 1584 | 1.8087 | 47.5887 | 27.0488 | 29.8569 | 44.7318 | 140.8 | | 0.1478 | 13.0 | 1716 | 1.8263 | 46.1251 | 25.8576 | 30.1698 | 42.7228 | 142.0 | | 0.1478 | 14.0 | 1848 | 1.9459 | 46.4034 | 25.7039 | 28.2542 | 43.7254 | 142.0 | | 0.1478 | 15.0 | 1980 | 1.9539 | 44.4666 | 24.5827 | 27.7147 | 41.9769 | 142.0 | | 0.0779 | 16.0 | 2112 | 1.9654 | 47.2267 | 26.4562 | 29.7352 | 44.0823 | 142.0 | | 0.0779 | 17.0 | 2244 | 1.9580 | 48.5086 | 28.0294 | 30.8311 | 45.6336 | 142.0 | | 0.0779 | 18.0 | 2376 | 2.0065 | 48.293 | 28.5678 | 30.0243 | 45.1384 | 142.0 | | 0.0499 | 19.0 | 2508 | 1.9313 | 49.0549 | 28.9695 | 32.0711 | 46.3834 | 142.0 | | 0.0499 | 20.0 | 2640 | 2.0176 | 47.0121 | 25.1606 | 29.0108 | 44.1556 | 142.0 | | 0.0499 | 21.0 | 2772 | 2.0711 | 48.3754 | 28.2221 | 30.772 | 45.8547 | 140.95 | | 0.0499 | 22.0 | 2904 | 2.0848 | 45.7392 | 25.254 | 29.0833 | 43.0381 | 142.0 | | 0.0335 | 23.0 | 3036 | 2.0711 | 47.2931 | 27.4573 | 30.718 | 44.5932 | 142.0 | | 0.0335 | 24.0 | 3168 | 2.1200 | 50.515 | 30.4253 | 33.7045 | 47.6158 | 142.0 | | 0.0335 | 25.0 | 3300 | 2.1097 | 46.4737 | 26.3055 | 29.0148 | 43.2135 | 142.0 | | 0.0335 | 26.0 | 3432 | 2.1695 | 46.9099 | 26.5227 | 29.7757 | 44.0613 | 142.0 | | 0.0249 | 27.0 | 3564 | 2.1494 | 47.8319 | 27.6364 | 31.3593 | 45.065 | 
141.95 | | 0.0249 | 28.0 | 3696 | 2.1510 | 47.504 | 26.8971 | 31.7196 | 45.0328 | 142.0 | | 0.0249 | 29.0 | 3828 | 2.1612 | 46.8789 | 27.266 | 30.1009 | 43.8248 | 142.0 | | 0.0249 | 30.0 | 3960 | 2.1579 | 47.7012 | 27.7761 | 30.935 | 44.3686 | 142.0 | | 0.018 | 31.0 | 4092 | 2.1981 | 48.4703 | 29.167 | 31.9815 | 45.8005 | 142.0 | | 0.018 | 32.0 | 4224 | 2.2332 | 45.9512 | 25.8111 | 29.2467 | 42.9234 | 142.0 | | 0.018 | 33.0 | 4356 | 2.1944 | 47.7189 | 28.1413 | 30.9692 | 44.9361 | 142.0 | | 0.018 | 34.0 | 4488 | 2.2589 | 50.9687 | 32.3987 | 36.5644 | 48.3938 | 142.0 | | 0.0132 | 35.0 | 4620 | 2.2269 | 47.8241 | 28.0442 | 31.5535 | 44.9394 | 142.0 | | 0.0132 | 36.0 | 4752 | 2.2865 | 47.4383 | 27.0825 | 30.4109 | 44.194 | 142.0 | | 0.0132 | 37.0 | 4884 | 2.3267 | 49.1786 | 29.6416 | 32.875 | 46.8821 | 142.0 | | 0.0095 | 38.0 | 5016 | 2.2872 | 48.2085 | 28.3304 | 32.1473 | 45.3571 | 142.0 | | 0.0095 | 39.0 | 5148 | 2.3340 | 46.6762 | 26.1637 | 29.0149 | 43.5923 | 142.0 | | 0.0095 | 40.0 | 5280 | 2.3425 | 46.7561 | 26.1645 | 29.6337 | 43.6188 | 142.0 | | 0.0095 | 41.0 | 5412 | 2.3111 | 49.4118 | 29.9761 | 33.4765 | 46.601 | 142.0 | | 0.0076 | 42.0 | 5544 | 2.3892 | 45.3335 | 25.0161 | 28.4124 | 41.9873 | 142.0 | | 0.0076 | 43.0 | 5676 | 2.3808 | 46.2506 | 26.4283 | 29.3841 | 42.7488 | 142.0 | | 0.0076 | 44.0 | 5808 | 2.3825 | 45.6823 | 26.0048 | 29.5501 | 42.6475 | 142.0 | | 0.0076 | 45.0 | 5940 | 2.3592 | 47.9127 | 26.7924 | 30.2353 | 44.791 | 142.0 | | 0.0051 | 46.0 | 6072 | 2.4206 | 46.0415 | 27.0681 | 29.9602 | 43.1225 | 142.0 | | 0.0051 | 47.0 | 6204 | 2.4214 | 48.1229 | 29.0913 | 31.1828 | 45.0022 | 142.0 | | 0.0051 | 48.0 | 6336 | 2.4176 | 47.3825 | 27.7622 | 30.4138 | 43.9047 | 142.0 | | 0.0051 | 49.0 | 6468 | 2.4137 | 48.2544 | 28.277 | 31.5548 | 45.6053 | 142.0 | | 0.0041 | 50.0 | 6600 | 2.4384 | 49.6459 | 30.186 | 33.0059 | 47.0483 | 142.0 | | 0.0041 | 51.0 | 6732 | 2.4433 | 47.7279 | 27.7857 | 30.2982 | 45.0842 | 142.0 | | 0.0041 | 52.0 | 6864 | 2.4068 | 48.6047 | 28.1758 | 31.2744 | 45.8336 | 142.0 | | 0.0041 | 53.0 | 6996 | 2.4362 | 48.7095 | 29.3335 | 31.9509 | 46.4161 | 142.0 | | 0.003 | 54.0 | 7128 | 2.4307 | 48.836 | 29.6069 | 32.4004 | 46.1986 | 142.0 | | 0.003 | 55.0 | 7260 | 2.4292 | 47.2945 | 26.7577 | 28.9719 | 43.8988 | 142.0 | | 0.003 | 56.0 | 7392 | 2.4425 | 45.2261 | 25.6879 | 28.8129 | 42.6474 | 142.0 | | 0.0024 | 57.0 | 7524 | 2.4386 | 47.967 | 28.5415 | 32.2049 | 45.5111 | 142.0 | | 0.0024 | 58.0 | 7656 | 2.4528 | 47.5552 | 27.6397 | 30.9151 | 44.2627 | 142.0 | | 0.0024 | 59.0 | 7788 | 2.4574 | 46.7821 | 27.3368 | 30.6334 | 44.0533 | 142.0 | | 0.0024 | 60.0 | 7920 | 2.4659 | 47.3507 | 26.8371 | 30.4566 | 44.4452 | 142.0 | | 0.0018 | 61.0 | 8052 | 2.4766 | 47.9847 | 28.2678 | 30.0664 | 45.0071 | 142.0 | | 0.0018 | 62.0 | 8184 | 2.4682 | 46.8392 | 27.1275 | 30.144 | 43.6379 | 142.0 | | 0.0018 | 63.0 | 8316 | 2.4754 | 45.6338 | 26.2812 | 29.4831 | 42.8744 | 142.0 | | 0.0018 | 64.0 | 8448 | 2.4772 | 46.5444 | 27.4056 | 29.6779 | 44.0905 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
ml4pubmed/albert-base-v2_pub_section
ml4pubmed
2022-05-04T00:09:08Z
5
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "en", "dataset:pubmed", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T23:25:25Z
---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
  example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
  example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
  example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
  example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
  example_title: "objective example"
---

# albert-base-v2_pub_section

- original model file name: textclassifer_albert-base-v2_pubmed_full
- This is a fine-tuned checkpoint of `albert-base-v2` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS

## metadata

### training_parameters

- date_run: Apr-26-2022_t-04
- huggingface_tag: albert-base-v2
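A small usage sketch (assumed, not part of the original card), reusing two of the widget sentences above:

```python
# Hedged sketch: classify sentences into publication section classes.
from transformers import pipeline

classifier = pipeline("text-classification", model="ml4pubmed/albert-base-v2_pub_section")

sentences = [
    "a total of 192 mi patients and 140 control persons were included.",
    "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995.",
]
for sentence in sentences:
    print(classifier(sentence))
```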
Lauler/sentiment-classifier
Lauler
2022-05-03T23:28:00Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T23:25:23Z
## Sentiment classifier

Sentiment classifier for Swedish, trained on the ScandiSent dataset.
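A minimal usage sketch (assumed, not part of the original card); the Swedish example sentences are invented, and the label names come from the model's config rather than from this card.

```python
# Hedged usage sketch for the Swedish sentiment classifier.
from transformers import pipeline

sentiment = pipeline("text-classification", model="Lauler/sentiment-classifier")

texts = [
    "Jag älskar den här filmen.",            # "I love this movie."
    "Det här var det sämsta jag har sett.",  # "This was the worst I have seen."
]
print(sentiment(texts))
```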
theojolliffe/bart-large-cnn-finetuned-roundup-32
theojolliffe
2022-05-03T21:24:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T19:23:27Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-32 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2324 - Rouge1: 46.462 - Rouge2: 25.9506 - Rougel: 29.4584 - Rougelsum: 44.1863 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 132 | 1.3139 | 48.8247 | 29.2173 | 31.7628 | 45.8992 | 142.0 | | No log | 2.0 | 264 | 1.2287 | 47.9398 | 29.4061 | 30.9133 | 44.9142 | 140.9 | | No log | 3.0 | 396 | 1.2676 | 49.2743 | 30.4469 | 32.8893 | 46.6208 | 142.0 | | 0.9578 | 4.0 | 528 | 1.3218 | 47.315 | 26.7303 | 30.5007 | 44.7654 | 142.0 | | 0.9578 | 5.0 | 660 | 1.3173 | 47.1476 | 25.9408 | 29.4257 | 44.4956 | 142.0 | | 0.9578 | 6.0 | 792 | 1.4283 | 47.5836 | 27.1572 | 29.8553 | 44.8858 | 142.0 | | 0.9578 | 7.0 | 924 | 1.5005 | 46.6839 | 26.2214 | 30.1895 | 43.8753 | 140.75 | | 0.3306 | 8.0 | 1056 | 1.5316 | 47.7611 | 27.1105 | 30.8142 | 44.7598 | 142.0 | | 0.3306 | 9.0 | 1188 | 1.6295 | 48.4416 | 27.6912 | 30.3409 | 45.317 | 142.0 | | 0.3306 | 10.0 | 1320 | 1.6564 | 46.5751 | 27.2306 | 29.7265 | 43.7327 | 142.0 | | 0.3306 | 11.0 | 1452 | 1.7471 | 47.9684 | 27.5739 | 30.7018 | 44.6852 | 141.75 | | 0.145 | 12.0 | 1584 | 1.7700 | 47.9274 | 28.5129 | 31.129 | 45.1009 | 142.0 | | 0.145 | 13.0 | 1716 | 1.8391 | 49.8091 | 30.1597 | 33.6004 | 47.2007 | 141.95 | | 0.145 | 14.0 | 1848 | 1.9212 | 45.2195 | 25.033 | 27.4181 | 42.6161 | 142.0 | | 0.145 | 15.0 | 1980 | 1.9267 | 48.4959 | 28.1 | 31.2796 | 46.2758 | 142.0 | | 0.0723 | 16.0 | 2112 | 1.9130 | 47.0765 | 27.4929 | 30.6862 | 44.1458 | 142.0 | | 0.0723 | 17.0 | 2244 | 1.9514 | 48.5354 | 28.4909 | 31.8966 | 45.7116 | 142.0 | | 0.0723 | 18.0 | 2376 | 2.0064 | 47.9339 | 28.6862 | 32.4472 | 45.3704 | 142.0 | | 0.042 | 19.0 | 2508 | 2.0210 | 48.3169 | 28.1579 | 30.2681 | 45.3831 | 141.3 | | 0.042 | 20.0 | 2640 | 2.0377 | 46.8156 | 26.0122 | 28.817 | 43.9383 | 142.0 | | 0.042 | 21.0 | 2772 | 2.0587 | 46.3813 | 27.3555 | 29.875 | 43.6605 | 142.0 | | 0.042 | 22.0 | 2904 | 2.0695 | 45.6728 | 26.0639 | 29.5653 | 42.3772 | 142.0 | | 0.025 | 23.0 | 3036 | 2.1617 | 46.7283 | 26.2082 | 28.52 | 43.3304 | 142.0 | | 0.025 | 24.0 | 3168 | 2.1375 | 48.1347 | 28.3444 | 31.7509 | 45.4907 | 142.0 | | 0.025 | 25.0 | 3300 | 2.1911 | 47.3358 | 27.1479 | 29.4923 | 44.0087 | 142.0 | | 0.025 | 26.0 | 3432 | 2.1806 | 47.2218 | 26.8421 | 30.03 | 44.2417 | 142.0 | | 0.0153 | 27.0 | 3564 | 2.1890 | 46.3745 | 27.0095 | 29.7274 | 43.3372 | 142.0 | | 0.0153 
| 28.0 | 3696 | 2.2235 | 50.1274 | 30.8817 | 32.8766 | 46.7486 | 141.5 | | 0.0153 | 29.0 | 3828 | 2.2236 | 50.1785 | 30.8079 | 32.8886 | 46.9888 | 142.0 | | 0.0153 | 30.0 | 3960 | 2.2312 | 46.7468 | 26.4272 | 30.1175 | 43.9132 | 142.0 | | 0.0096 | 31.0 | 4092 | 2.2287 | 47.558 | 26.3933 | 29.9122 | 44.5752 | 142.0 | | 0.0096 | 32.0 | 4224 | 2.2324 | 46.462 | 25.9506 | 29.4584 | 44.1863 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
SebastianS/distilbert-base-uncased-finetuned-imdb
SebastianS
2022-05-03T20:42:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-03T19:56:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0122 - eval_runtime: 27.9861 - eval_samples_per_second: 35.732 - eval_steps_per_second: 0.572 - epoch: 2.13 - step: 334 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
stevemobs/bert-finetuned-squad-pytorch
stevemobs
2022-05-03T20:17:32Z
8
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-03T17:49:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad-pytorch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad-pytorch This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
BigSalmon/ConciseAndFormal
BigSalmon
2022-05-03T19:42:53Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T19:34:00Z
How to start the prompt:

```
wordy:
```

Example:

```
wordy: the ndp has turned into the country's darling of the young.
```

Output:

```
the ndp is youth-driven.
```

OR

```
informal english:
```

Example:

```
informal english: corn fields are all across illinois, visible once you leave chicago.
```

Output:

```
corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
```
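A rough sketch of how such a prompt might be fed to this checkpoint (the tags mark it as a T5-style text2text model); the generation settings are illustrative assumptions, not the author's.

```python
# Hedged sketch: rewrite a "wordy:" prompt with this T5-style checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/ConciseAndFormal")
model = AutoModelForSeq2SeqLM.from_pretrained("BigSalmon/ConciseAndFormal")

prompt = "wordy: the ndp has turned into the country's darling of the young."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```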
theojolliffe/bart-large-cnn-finetuned-roundup-16
theojolliffe
2022-05-03T19:21:08Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T18:14:34Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-16 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8957 - Rouge1: 49.4097 - Rouge2: 29.3516 - Rougel: 31.527 - Rougelsum: 46.4241 - Gen Len: 141.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 132 | 1.3170 | 48.412 | 29.2017 | 31.6679 | 45.494 | 141.85 | | No log | 2.0 | 264 | 1.2292 | 49.0133 | 29.6645 | 30.7612 | 46.1673 | 142.0 | | No log | 3.0 | 396 | 1.2670 | 49.183 | 29.4104 | 31.573 | 46.7082 | 142.0 | | 0.9596 | 4.0 | 528 | 1.3059 | 47.3854 | 26.6865 | 28.4666 | 44.4934 | 141.8 | | 0.9596 | 5.0 | 660 | 1.3288 | 48.1189 | 26.9242 | 31.2938 | 45.3462 | 142.0 | | 0.9596 | 6.0 | 792 | 1.4084 | 47.5713 | 26.7488 | 29.2959 | 45.1764 | 141.3 | | 0.9596 | 7.0 | 924 | 1.5043 | 46.5407 | 26.0995 | 29.9007 | 43.9335 | 142.0 | | 0.3369 | 8.0 | 1056 | 1.5115 | 49.6891 | 29.0514 | 32.33 | 46.9357 | 142.0 | | 0.3369 | 9.0 | 1188 | 1.6131 | 47.5773 | 27.6348 | 30.5294 | 45.1151 | 142.0 | | 0.3369 | 10.0 | 1320 | 1.6837 | 46.5699 | 26.3805 | 29.8581 | 43.5252 | 142.0 | | 0.3369 | 11.0 | 1452 | 1.7874 | 47.1383 | 26.535 | 30.1724 | 44.2508 | 142.0 | | 0.148 | 12.0 | 1584 | 1.7776 | 49.8061 | 30.1994 | 33.2405 | 47.6102 | 142.0 | | 0.148 | 13.0 | 1716 | 1.8144 | 48.4451 | 28.2949 | 30.9026 | 45.6614 | 142.0 | | 0.148 | 14.0 | 1848 | 1.8646 | 50.1964 | 30.4426 | 32.8156 | 47.4134 | 142.0 | | 0.148 | 15.0 | 1980 | 1.8829 | 48.8129 | 29.2358 | 32.3247 | 46.2233 | 142.0 | | 0.0726 | 16.0 | 2112 | 1.8957 | 49.4097 | 29.3516 | 31.527 | 46.4241 | 141.9 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
mak109/distilgpt2-finetuned-lyrics
mak109
2022-05-03T19:20:58Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-05-03T15:48:21Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: mak109/distilgpt2-finetuned-lyrics results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mak109/distilgpt2-finetuned-lyrics This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.0226 - Validation Loss: 3.0275 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.2907 | 3.1500 | 0 | | 3.1607 | 3.0962 | 1 | | 3.1005 | 3.0664 | 2 | | 3.0573 | 3.0430 | 3 | | 3.0226 | 3.0275 | 4 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.6.3 - Datasets 2.1.0 - Tokenizers 0.12.1
laituan245/molt5-base-caption2smiles
laituan245
2022-05-03T18:08:45Z
764
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T04:08:16Z
--- license: apache-2.0 --- This model can be used to generate a SMILES string from an input caption. ## Example Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-caption2smiles", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-caption2smiles') input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.' input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, num_beams=5, max_length=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # The model will generate "COC1=C(C=CC(=C1)CCCO)O". The ground-truth is "COC1=C(C=CC(=C1)CO)O". ``` ## Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
laituan245/molt5-large-caption2smiles
laituan245
2022-05-03T18:08:19Z
7,081
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T15:58:10Z
--- license: apache-2.0 --- This model can be used to generate a SMILES string from an input caption. ## Example Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-large-caption2smiles", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large-caption2smiles') input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.' input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, num_beams=5, max_length=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
TehranNLP-org/electra-base-hateXplain
TehranNLP-org
2022-05-03T17:00:31Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-30T12:51:26Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SEED0042 results: - task: name: Text Classification type: text-classification dataset: name: HATEXPLAIN type: '' args: hatexplain metrics: - name: Accuracy type: accuracy value: 0.4162330905306972 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SEED0042 This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the HATEXPLAIN dataset. It achieves the following results on the evaluation set: - Loss: 0.7667 - Accuracy: 0.4162 - Accuracy 0: 0.8145 - Accuracy 1: 0.1895 - Accuracy 2: 0.3084 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: not_parallel - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Accuracy 0 | Accuracy 1 | Accuracy 2 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:----------:|:----------:| | No log | 1.0 | 481 | 0.7431 | 0.4152 | 0.7707 | 0.1805 | 0.3650 | | No log | 2.0 | 962 | 0.7346 | 0.4152 | 0.8010 | 0.2190 | 0.2774 | | No log | 3.0 | 1443 | 0.7667 | 0.4162 | 0.8145 | 0.1895 | 0.3084 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.11.6
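For completeness, here is a minimal classification sketch for this checkpoint. It is not from the original card; the input sentence is a placeholder, and the human-readable label names depend on the model config (they may simply be LABEL_0, LABEL_1, LABEL_2 if the authors did not set id2label).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "TehranNLP-org/electra-base-hateXplain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Your input sentence here.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# Label names come from the config and may be generic (e.g. LABEL_0, LABEL_1, LABEL_2).
print(pred, model.config.id2label[pred])
```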
mrm8488/data2vec-text-base-finetuned-stsb
mrm8488
2022-05-03T16:28:24Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "data2vec-text", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-03T15:51:59Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: data2vec-text-base-finetuned-stsb results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8716633516590501 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # data2vec-text-base-finetuned-stsb This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5530 - Pearson: 0.8732 - Spearmanr: 0.8717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.725353773731373e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 180 | 1.0650 | 0.8102 | 0.8380 | | No log | 2.0 | 360 | 0.6211 | 0.8524 | 0.8497 | | 0.9312 | 3.0 | 540 | 0.5917 | 0.8640 | 0.8642 | | 0.9312 | 4.0 | 720 | 0.5672 | 0.8695 | 0.8686 | | 0.9312 | 5.0 | 900 | 0.5530 | 0.8732 | 0.8717 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
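STS-B is a sentence-pair regression task, so the fine-tuned head returns a single similarity score (roughly on the 0 to 5 GLUE scale). A minimal sketch follows; the example sentence pair is an arbitrary illustration, not data from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mrm8488/data2vec-text-base-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two sentences as a pair; the regression head emits one logit.
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity score: {score:.2f}")  # roughly on the 0-5 STS-B scale
```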
facebook/data2vec-vision-base
facebook
2022-05-03T15:52:10Z
664
3
transformers
[ "transformers", "pytorch", "tf", "data2vec-vision", "image-feature-extraction", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-1k", "arxiv:2202.03555", "arxiv:2106.08254", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2022-04-14T08:08:12Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-1k --- # Data2Vec-Vision (base-sized model, pre-trained only) BEiT model pre-trained in a self-supervised fashion on ImageNet-1k (1.2 million images, 1000 classes) at resolution 224x224. It was introduced in the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli and first released in [this repository](https://github.com/facebookresearch/data2vec_vision/tree/main/beit). Disclaimer: The Facebook team releasing this model did not write a model card for it, so this model card has been written by the Hugging Face team. ## Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). ## Abstract *While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.* ## Intended uses & limitations This checkpoint is pre-trained only, without a classification head; you can fine-tune it for image classification or use it to extract image features. See the [model hub](https://huggingface.co/models?other=data2vec-vision) to look for fine-tuned versions on a task that interests you. ## Training data The BEiT model was pretrained on [ImageNet-1k](http://www.image-net.org/), a dataset consisting of 1.2 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to the [original paper](https://arxiv.org/abs/2106.08254) and the [original codebase](https://github.com/facebookresearch/data2vec_vision/tree/main/beit). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to Table 1 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2202.03555, doi = {10.48550/ARXIV.2202.03555}, url = {https://arxiv.org/abs/2202.03555}, author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael}, keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
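Since this checkpoint is pre-trained only (no classification head), a typical use is extracting image features. The sketch below is not from the original card: it assumes the `Data2VecVisionModel` and `AutoFeatureExtractor` classes shipped in recent transformers releases, and the image URL is an arbitrary COCO sample.

```python
import requests
from PIL import Image
from transformers import AutoFeatureExtractor, Data2VecVisionModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # arbitrary sample image
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-vision-base")
model = Data2VecVisionModel.from_pretrained("facebook/data2vec-vision-base")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```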
soyasis/gpt2-finetuned-how-to-qa
soyasis
2022-05-03T15:32:40Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-01T13:10:42Z
--- language: en license: mit --- # HowTo QA with GPT-2 base GPT-2 English language model fine-tuned with approximately 2,000 entries from WikiHow. You can try it here: https://how-to-generator.herokuapp.com/ Input prompts should follow this format: `\n<|startoftext|>[WP] How to {text} \n[RESPONSE]` Example: `\n<|startoftext|>[WP] How to create a universe \n[RESPONSE]`
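A minimal sketch of the prompt format described above, using the standard text-generation pipeline; the sampling settings and output length are illustrative assumptions, not values documented by the author.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="soyasis/gpt2-finetuned-how-to-qa")

# Build the prompt exactly as documented: \n<|startoftext|>[WP] How to {text} \n[RESPONSE]
prompt = "\n<|startoftext|>[WP] How to create a universe \n[RESPONSE]"
result = generator(prompt, max_length=120, do_sample=True, top_p=0.95, num_return_sequences=1)
print(result[0]["generated_text"])
```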
pietrolesci/t5v1_1-base-mnli_snli_anli
pietrolesci
2022-05-03T14:46:07Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-03T14:33:00Z
## Overview T5-Base v1.1 model trained to generate hypotheses given a premise and a label. Below the settings used to train it. ```yaml Experiment configurations β”œβ”€β”€ datasets β”‚ └── snli_train: β”‚ dataset_name: snli β”‚ dataset_config_name: null β”‚ cache_dir: null β”‚ input_fields: β”‚ - premise β”‚ - hypothesis β”‚ target_field: label β”‚ train_subset_names: null β”‚ val_subset_names: validation β”‚ test_subset_names: none β”‚ train_val_split: null β”‚ limit_train_samples: null β”‚ limit_val_samples: null β”‚ limit_test_samples: null β”‚ sampling_kwargs: β”‚ sampling_strategy: random β”‚ seed: 42 β”‚ replace: false β”‚ align_labels_with_mapping: null β”‚ avoid_consistency_check: false β”‚ predict_label_mapping: null β”‚ anli_train: β”‚ dataset_name: anli β”‚ dataset_config_name: null β”‚ cache_dir: null β”‚ input_fields: β”‚ - premise β”‚ - hypothesis β”‚ target_field: label β”‚ train_subset_names: β”‚ - train_r1 β”‚ - train_r2 β”‚ - train_r3 β”‚ val_subset_names: β”‚ - dev_r1 β”‚ - dev_r2 β”‚ - dev_r3 β”‚ test_subset_names: none β”‚ train_val_split: null β”‚ limit_train_samples: null β”‚ limit_val_samples: null β”‚ limit_test_samples: null β”‚ sampling_kwargs: β”‚ sampling_strategy: random β”‚ seed: 42 β”‚ replace: false β”‚ align_labels_with_mapping: null β”‚ avoid_consistency_check: false β”‚ predict_label_mapping: null β”‚ mnli_train: β”‚ dataset_name: multi_nli β”‚ dataset_config_name: null β”‚ cache_dir: null β”‚ input_fields: β”‚ - premise β”‚ - hypothesis β”‚ target_field: label β”‚ train_subset_names: null β”‚ val_subset_names: validation_matched β”‚ test_subset_names: none β”‚ train_val_split: null β”‚ limit_train_samples: null β”‚ limit_val_samples: null β”‚ limit_test_samples: null β”‚ sampling_kwargs: β”‚ sampling_strategy: random β”‚ seed: 42 β”‚ replace: false β”‚ align_labels_with_mapping: null β”‚ avoid_consistency_check: false β”‚ predict_label_mapping: null β”‚ snli: β”‚ dataset_name: snli β”‚ dataset_config_name: null β”‚ cache_dir: null β”‚ input_fields: β”‚ - premise β”‚ - hypothesis β”‚ target_field: label β”‚ train_subset_names: none β”‚ val_subset_names: none β”‚ test_subset_names: null β”‚ train_val_split: null β”‚ limit_train_samples: null β”‚ limit_val_samples: null β”‚ limit_test_samples: null β”‚ sampling_kwargs: β”‚ sampling_strategy: random β”‚ seed: 42 β”‚ replace: false β”‚ align_labels_with_mapping: null β”‚ avoid_consistency_check: false β”‚ predict_label_mapping: null β”‚ anli: β”‚ dataset_name: anli β”‚ dataset_config_name: null β”‚ cache_dir: null β”‚ input_fields: β”‚ - premise β”‚ - hypothesis β”‚ target_field: label β”‚ train_subset_names: none β”‚ val_subset_names: none β”‚ test_subset_names: β”‚ - test_r1 β”‚ - test_r2 β”‚ - test_r3 β”‚ train_val_split: null β”‚ limit_train_samples: null β”‚ limit_val_samples: null β”‚ limit_test_samples: null β”‚ sampling_kwargs: β”‚ sampling_strategy: random β”‚ seed: 42 β”‚ replace: false β”‚ align_labels_with_mapping: null β”‚ avoid_consistency_check: false β”‚ predict_label_mapping: null β”‚ mnli: β”‚ dataset_name: multi_nli β”‚ dataset_config_name: null β”‚ cache_dir: null β”‚ input_fields: β”‚ - premise β”‚ - hypothesis β”‚ target_field: label β”‚ train_subset_names: none β”‚ val_subset_names: none β”‚ test_subset_names: validation_mismatched β”‚ train_val_split: null β”‚ limit_train_samples: null β”‚ limit_val_samples: null β”‚ limit_test_samples: null β”‚ sampling_kwargs: β”‚ sampling_strategy: random β”‚ seed: 42 β”‚ replace: false β”‚ align_labels_with_mapping: null β”‚ 
avoid_consistency_check: false β”‚ predict_label_mapping: null β”‚ β”œβ”€β”€ data β”‚ └── _target_: src.task.nli.data.NLIGenerationData.from_config β”‚ main_dataset_name: null β”‚ use_additional_as_test: null β”‚ dataloader: β”‚ batch_size: 96 β”‚ eval_batch_size: 96 β”‚ num_workers: 8 β”‚ pin_memory: true β”‚ drop_last: false β”‚ persistent_workers: false β”‚ shuffle: true β”‚ seed_dataloader: 42 β”‚ replacement: false β”‚ processing: β”‚ preprocessing_num_workers: 8 β”‚ preprocessing_batch_size: 1000 β”‚ load_from_cache_file: true β”‚ padding: longest β”‚ truncation: longest_first β”‚ max_source_length: 128 β”‚ max_target_length: 128 β”‚ template: 'premise: $premise $label hypothesis: ' β”‚ tokenizer: β”‚ _target_: transformers.AutoTokenizer.from_pretrained β”‚ pretrained_model_name_or_path: pietrolesci/t5-v1_1-base_nli_gen β”‚ use_fast: true β”‚ β”œβ”€β”€ task β”‚ └── optimizer: β”‚ name: Adafactor β”‚ lr: 0.001 β”‚ weight_decay: 0.0 β”‚ no_decay: β”‚ - bias β”‚ - LayerNorm.weight β”‚ decay_rate: -0.8 β”‚ clip_threshold: 1.0 β”‚ relative_step: false β”‚ scale_parameter: false β”‚ warmup_init: false β”‚ scheduler: β”‚ name: constant_schedule β”‚ model: β”‚ model_name_or_path: pietrolesci/t5-v1_1-base_nli_gen β”‚ checkpoint_path: null β”‚ freeze: false β”‚ seed_init_weight: 42 β”‚ _target_: src.task.nli.NLIGenerationTask.from_config β”‚ generation: β”‚ generation_max_length: 128 β”‚ generation_min_length: 3 β”‚ do_sample: true β”‚ early_stopping: false β”‚ num_beams: 1 β”‚ temperature: 1.0 β”‚ top_k: 50 β”‚ top_p: 0.95 β”‚ repetition_penalty: null β”‚ length_penalty: null β”‚ no_repeat_ngram_size: null β”‚ encoder_no_repeat_ngram_size: null β”‚ num_return_sequences: 1 β”‚ max_time: null β”‚ max_new_tokens: null β”‚ decoder_start_token_id: null β”‚ use_cache: null β”‚ num_beam_groups: null β”‚ diversity_penalty: null β”‚ β”œβ”€β”€ trainer β”‚ └── _target_: pytorch_lightning.Trainer β”‚ callbacks: β”‚ lr_monitor: β”‚ _target_: pytorch_lightning.callbacks.LearningRateMonitor β”‚ logging_interval: step β”‚ log_momentum: false β”‚ model_checkpoint: β”‚ _target_: pytorch_lightning.callbacks.ModelCheckpoint β”‚ dirpath: ./checkpoints/ β”‚ filename: nli_generator_sma-epoch={epoch:02d}-val_loss={val/aggregat β”‚ monitor: val/aggregated_loss β”‚ mode: min β”‚ verbose: false β”‚ save_last: true β”‚ save_top_k: 1 β”‚ auto_insert_metric_name: false β”‚ save_on_train_epoch_end: false β”‚ rich_model_summary: β”‚ _target_: pytorch_lightning.callbacks.RichModelSummary β”‚ max_depth: 1 β”‚ log_grad_norm: β”‚ _target_: src.core.callbacks.LogGradNorm β”‚ norm_type: 2 β”‚ group_separator: / β”‚ only_total: true β”‚ on_step: true β”‚ on_epoch: false β”‚ prog_bar: true β”‚ log_generated_text: β”‚ _target_: src.core.callbacks.GenerateAndLogText β”‚ dirpath: ./generated_text β”‚ type: generated_text β”‚ pop_keys_after_logging: true β”‚ on_train: false β”‚ on_validation: false β”‚ on_test: true β”‚ log_to_wandb: true β”‚ wandb_log_dataset_sizes: β”‚ _target_: src.core.callbacks.WandbLogDatasetSizes β”‚ logger: β”‚ wandb: β”‚ _target_: pytorch_lightning.loggers.WandbLogger β”‚ project: nli_debiasing β”‚ entity: team_brushino β”‚ name: nli_generator_sma β”‚ save_dir: ./ β”‚ offline: false β”‚ log_model: false β”‚ group: generator β”‚ job_type: genearator_training β”‚ tags: β”‚ - nli_generator_sma β”‚ - seed=42 β”‚ - seed_dataloader=42 β”‚ notes: nli_generator_sma_time=01-37-04 β”‚ enable_checkpointing: true β”‚ enable_progress_bar: true β”‚ enable_model_summary: true β”‚ gradient_clip_val: 6 β”‚ 
gradient_clip_algorithm: null β”‚ accelerator: gpu β”‚ devices: auto β”‚ gpus: null β”‚ auto_select_gpus: true β”‚ accumulate_grad_batches: 1 β”‚ max_epochs: 2 β”‚ min_epochs: 1 β”‚ max_steps: -1 β”‚ min_steps: null β”‚ max_time: null β”‚ num_sanity_val_steps: 2 β”‚ overfit_batches: 0.0 β”‚ fast_dev_run: false β”‚ limit_train_batches: 1.0 β”‚ limit_val_batches: 1.0 β”‚ limit_test_batches: 1.0 β”‚ profiler: null β”‚ detect_anomaly: false β”‚ deterministic: false β”‚ check_val_every_n_epoch: 1 β”‚ val_check_interval: 0.5 β”‚ log_every_n_steps: 1 β”‚ move_metrics_to_cpu: false β”‚ └── training └── run_val_before_fit: false run_val_after_fit: false run_test_before_fit: false run_test_after_fit: true lr: 0.001 seed: 42 show_batch: false batch_size: 96 eval_batch_size: 96 num_workers: 8 pin_memory: true drop_last: false persistent_workers: false shuffle: true seed_dataloader: 42 ignore_warnings: true experiment_name: nli_generator_sma ```
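Based on the template (`'premise: $premise $label hypothesis: '`) and the sampling settings listed in the configuration above, a minimal inference sketch follows. The concrete label string substituted for `$label` ("entailment" below) and the example premise are assumptions, since the configuration does not spell them out.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "pietrolesci/t5v1_1-base-mnli_snli_anli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Template from the training config: 'premise: $premise $label hypothesis: '
# The label word ("entailment") is an assumption; the config does not list the label strings.
premise = "A soccer game with multiple males playing."
prompt = f"premise: {premise} entailment hypothesis: "

inputs = tokenizer(prompt, return_tensors="pt")
# Sampling parameters mirror the generation section of the config (sampling enabled, top_k=50, top_p=0.95).
outputs = model.generate(**inputs, do_sample=True, top_k=50, top_p=0.95, max_length=128, min_length=3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```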
PoloHuggingface/French_grammar_error_corrector
PoloHuggingface
2022-05-03T13:32:40Z
102
6
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text2text generation", "fr", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-02T15:45:49Z
--- content: language: - fr tags: - text2text generation widget: - text: "improve grammar: Elle ne peux jamais aller au cinéma avec son amis" example_title: "Grammar correction" --- # Finetuned T5 on the French part of Lang-8 to automatically correct sentences. Since the Lang-8 dataset contains very short sentences, the model does not generalize well to sentences longer than 10 words. I'll soon upload the cleaned dataset that I used for training.
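A minimal sketch mirroring the widget example above: the `improve grammar:` prefix is taken from the card, while the pipeline call and `max_length` are assumptions.

```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="PoloHuggingface/French_grammar_error_corrector")

# The "improve grammar: " prefix follows the widget example in the card.
text = "improve grammar: Elle ne peux jamais aller au cinéma avec son amis"
print(corrector(text, max_length=64)[0]["generated_text"])
```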
sanchit-gandhi/flax-wav2vec2-2-bart-large-960h
sanchit-gandhi
2022-05-03T12:24:52Z
3
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-03T12:07:42Z
2.5% WER on dev.clean: https://wandb.ai/sanchit-gandhi/flax-wav2vec2-2-bart-large-960h/runs/2lhazd5v
datauma/bert-finetuned-ner
datauma
2022-05-03T11:52:53Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-03T11:24:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9312510328871261 - name: Recall type: recall value: 0.9483338943116796 - name: F1 type: f1 value: 0.9397148336529643 - name: Accuracy type: accuracy value: 0.9855624889621475 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0630 - Precision: 0.9313 - Recall: 0.9483 - F1: 0.9397 - Accuracy: 0.9856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.084 | 1.0 | 1756 | 0.0652 | 0.9203 | 0.9387 | 0.9294 | 0.9842 | | 0.0387 | 2.0 | 3512 | 0.0589 | 0.9271 | 0.9504 | 0.9386 | 0.9853 | | 0.0203 | 3.0 | 5268 | 0.0630 | 0.9313 | 0.9483 | 0.9397 | 0.9856 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
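A minimal usage sketch for the fine-tuned NER model; it is not from the original card, and the example sentence is an arbitrary illustration. Entity types follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC).

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entity spans.
ner = pipeline("token-classification", model="datauma/bert-finetuned-ner", aggregation_strategy="simple")

print(ner("My name is Clara and I live in Berkeley, California."))
```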