| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 18:52:31) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 18:52:05) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
allenai/multicite-multilabel-roberta-large
|
allenai
| 2022-05-10T17:46:12Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"Roberta",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-06T12:23:33Z |
---
language: en
tags:
- Roberta
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with Roberta-large (NAACL 2022)
This model has been trained on the data available here: https://github.com/allenai/multicite.
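The card does not show how to run the classifier; here is a minimal sketch (not from the original card, assuming the standard `transformers` text-classification API and that the citation-intent labels are stored in the checkpoint's config):
```python
from transformers import pipeline

# Citation-intent classification over a citation context (the example sentence is hypothetical).
classifier = pipeline("text-classification", model="allenai/multicite-multilabel-roberta-large")
print(classifier("We follow the evaluation protocol of [CITATION] and reuse their data splits."))
```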
|
allenai/multicite-multilabel-scibert
|
allenai
| 2022-05-10T17:45:24Z | 123 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"scibert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-06T12:02:26Z |
---
language: en
tags:
- scibert
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with SciBERT (NAACL 2022)
This model has been trained on the data available here: https://github.com/allenai/multicite
|
pglauner/distilbert-base-uncased-finetuned-emotion
|
pglauner
| 2022-05-10T17:42:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T15:12:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265216393152228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8432 | 1.0 | 250 | 0.3353 | 0.8975 | 0.8939 |
| 0.2582 | 2.0 | 500 | 0.2251 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
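The card stops at the framework versions; a minimal inference sketch (an illustration, not part of the original card, assuming the emotion labels are available via the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "pglauner/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I'm thrilled the training finally converged!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
# Map the highest-probability class index back to its emotion label.
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```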
|
paultimothymooney/distilbert-rater
|
paultimothymooney
| 2022-05-10T17:40:47Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T16:11:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-rater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rater
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5
|
husnu
| 2022-05-10T17:22:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-10T13:23:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5
This model is a fine-tuned version of [husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4](https://huggingface.co/husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3439
- Wer: 0.3634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1243 | 0.51 | 400 | 0.4312 | 0.4202 |
| 0.1956 | 1.02 | 800 | 0.4421 | 0.4498 |
| 0.1816 | 1.53 | 1200 | 0.4012 | 0.4285 |
| 0.1548 | 2.04 | 1600 | 0.3720 | 0.3845 |
| 0.1171 | 2.55 | 2000 | 0.3439 | 0.3634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
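A minimal transcription sketch (not part of the original card; it assumes a 16 kHz mono recording and the standard `automatic-speech-recognition` pipeline):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5",
)
# "sample_tr.wav" is a placeholder path to a Turkish speech recording.
print(asr("sample_tr.wav")["text"])
```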
|
domenicrosati/question_converter-3b
|
domenicrosati
| 2022-05-10T17:05:23Z | 41 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:domenicrosati/QA2D",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-04T17:03:22Z |
---
language:
- en
tags:
- text2text-generation
datasets:
- domenicrosati/QA2D
widget:
- text: "Where in the world is Carmen Sandiego. She is in Abruzzo"
example_title: "Where is Carmen Sandiego?"
- text: "Halifax is a city in which province. Nova Scotia"
example_title: "A Halifact"
---
# Question-Answer to Statement Converter
A question answer pair to statement converter from https://github.com/jifan-chen/QA-Verification-Via-NLI
See:
```
@article{chen2021can,
title={Can NLI Models Verify QA Systems' Predictions?},
author={Chen, Jifan and Choi, Eunsol and Durrett, Greg},
journal={EMNLP Findings},
year={2021}
}
```
**Note:** I am not the maintainer or original author; I am only hosting the model here so that the Hugging Face APIs can be used to produce statements from question-answer pairs in downstream applications.
## TL;DR:
We fine-tune a seq2seq model, T5-3B (Raffel et al., 2020), using the \\((a, q, d)\\) pairs annotated by Demszky et al. (2018), where \\(a\\) is the answer, \\(q\\) is the question, and \\(d\\) is the declarative sentence (i.e. the statement).
See Appendix B.2 of Chen et al. for more.
## Usage
The prompt should be `{question} {separator} {answer}`, where the separator is `</s>`.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('domenicrosati/question_converter-3b')
model = AutoModelForSeq2SeqLM.from_pretrained('domenicrosati/question_converter-3b')
question = "Where in the world is Carmen Sandiego?"
answer = "She is in Abruzzo"
prompt = f'{question} </s> {answer}'
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids)
responses = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
> `['Carmen Sandiego is in Abruzzo.']`
|
datauma/mt5-small-finetuned-amazon-en-es
|
datauma
| 2022-05-10T16:52:35Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-04T04:07:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: datauma/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# datauma/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2505
- Validation Loss: 3.4530
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.9288 | 5.8713 | 0 |
| 6.6821 | 4.3246 | 1 |
| 5.6453 | 3.8715 | 2 |
| 5.0908 | 3.6368 | 3 |
| 4.7348 | 3.5496 | 4 |
| 4.5106 | 3.4939 | 5 |
| 4.3261 | 3.4659 | 6 |
| 4.2505 | 3.4530 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
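A minimal generation sketch (an assumption, not from the card: the repository ships TensorFlow weights, so the TF classes are used, and the summarization behaviour depends on the undisclosed fine-tuning data):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "datauma/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# A hypothetical product review to summarize.
text = "I bought this kettle last month and it already stopped working."
inputs = tokenizer(text, return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```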
|
anuragshas/wav2vec2-xls-r-300m-ur-cv9-with-lm
|
anuragshas
| 2022-05-10T16:51:19Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"ur",
"dataset:mozilla-foundation/common_voice_9_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-04T14:27:44Z |
---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Urdu
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_9_0
name: Common Voice 9
args: ur
metrics:
- type: wer
value: 23.750
name: Test WER
- name: Test CER
type: cer
value: 8.310
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4147
- Wer: 0.3172
- Cer: 0.1050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5108
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.2894 | 7.83 | 400 | 3.1501 | 1.0 | 1.0 |
| 1.8586 | 15.68 | 800 | 0.8871 | 0.6721 | 0.2402 |
| 1.3431 | 23.52 | 1200 | 0.5813 | 0.5502 | 0.1939 |
| 1.2052 | 31.37 | 1600 | 0.4956 | 0.4788 | 0.1665 |
| 1.1097 | 39.21 | 2000 | 0.4447 | 0.4143 | 0.1397 |
| 1.0528 | 47.06 | 2400 | 0.4439 | 0.3961 | 0.1333 |
| 0.9939 | 54.89 | 2800 | 0.4348 | 0.4014 | 0.1379 |
| 0.9441 | 62.74 | 3200 | 0.4236 | 0.3653 | 0.1223 |
| 0.913 | 70.58 | 3600 | 0.4309 | 0.3475 | 0.1157 |
| 0.8678 | 78.43 | 4000 | 0.4270 | 0.3337 | 0.1110 |
| 0.8414 | 86.27 | 4400 | 0.4158 | 0.3220 | 0.1070 |
| 0.817 | 94.12 | 4800 | 0.4185 | 0.3231 | 0.1072 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
Joiner/ppoLunarLanding-v2
|
Joiner
| 2022-05-10T16:44:09Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T16:43:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 126.84 +/- 80.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
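Until the author fills in the TODO, here is a minimal loading sketch (assuming the checkpoint was pushed with `huggingface_sb3`; the zip filename below is a guess, so check the repository's file listing):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption; use the actual .zip name from the repo files.
checkpoint = load_from_hub(repo_id="Joiner/ppoLunarLanding-v2", filename="ppoLunarLanding-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```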
|
dtiapkin/ppo-LunalLander-v2
|
dtiapkin
| 2022-05-10T16:40:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T16:38:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 230.56 +/- 74.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
arjunpatel/distilgpt2-finetuned-wikitext2
|
arjunpatel
| 2022-05-10T16:34:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-10T01:46:36Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: arjunpatel/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arjunpatel/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7979
- Validation Loss: 3.6723
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.7979 | 3.6723 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Extred/TEST2ppo-LunarLander-v2
|
Extred
| 2022-05-10T16:23:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T16:23:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 256.82 +/- 17.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Udi-Aharon/distilbert-base-uncased-finetuned-ner
|
Udi-Aharon
| 2022-05-10T15:59:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-10T11:50:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.924327912379688
- name: Recall
type: recall
value: 0.9346683074169371
- name: F1
type: f1
value: 0.9294693514295249
- name: Accuracy
type: accuracy
value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
- Precision: 0.9243
- Recall: 0.9347
- F1: 0.9295
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2396 | 1.0 | 878 | 0.0715 | 0.9135 | 0.9228 | 0.9181 | 0.9805 |
| 0.051 | 2.0 | 1756 | 0.0617 | 0.9192 | 0.9334 | 0.9263 | 0.9826 |
| 0.0295 | 3.0 | 2634 | 0.0615 | 0.9243 | 0.9347 | 0.9295 | 0.9837 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
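A minimal inference sketch (not from the original card; it assumes the usual CoNLL-2003 entity types and the standard token-classification pipeline):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Udi-Aharon/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```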
|
arunprasadh/ppo-LunarLander-v3
|
arunprasadh
| 2022-05-10T14:32:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T14:32:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.62 +/- 20.33
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Davincilee/closure_system_door_inne-bert-base-uncased
|
Davincilee
| 2022-05-10T13:49:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-30T15:08:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: closure_system_door_inne-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# closure_system_door_inne-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7321 | 1.0 | 2 | 2.5801 |
| 2.6039 | 2.0 | 4 | 2.0081 |
| 2.4556 | 3.0 | 6 | 2.3329 |
| 2.3587 | 4.0 | 8 | 2.4156 |
| 2.2565 | 5.0 | 10 | 2.0009 |
| 2.3489 | 6.0 | 12 | 1.7774 |
| 2.2622 | 7.0 | 14 | 2.2064 |
| 2.415 | 8.0 | 16 | 1.9671 |
| 2.1873 | 9.0 | 18 | 2.0729 |
| 2.2377 | 10.0 | 20 | 2.0052 |
| 2.352 | 11.0 | 22 | 1.9614 |
| 2.2347 | 12.0 | 24 | 2.2437 |
| 2.1113 | 13.0 | 26 | 1.7145 |
| 2.1939 | 14.0 | 28 | 1.5418 |
| 2.0645 | 15.0 | 30 | 2.1882 |
| 2.1499 | 16.0 | 32 | 2.0266 |
| 2.1432 | 17.0 | 34 | 2.3583 |
| 2.0656 | 18.0 | 36 | 2.3147 |
| 2.0348 | 19.0 | 38 | 2.2807 |
| 2.0502 | 20.0 | 40 | 1.7122 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
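A minimal fill-mask sketch (an illustration, not part of the card; the domain-specific behaviour depends on the undisclosed training data, and the example sentence is hypothetical):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davincilee/closure_system_door_inne-bert-base-uncased")
# BERT-style checkpoints use the [MASK] token.
for prediction in unmasker("The door latch is part of the closure [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```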
|
florentgbelidji/setfit_emotion
|
florentgbelidji
| 2022-05-10T12:57:31Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-10T12:25:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# florentgbelidji/setfit_emotion
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('florentgbelidji/setfit_emotion')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=florentgbelidji/setfit_emotion)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 203 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 4060,
"warmup_steps": 406,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
patrickvonplaten/wav2vec2-base-timit-demo-google-colab
|
patrickvonplaten
| 2022-05-10T12:33:52Z | 19 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-10T11:02:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5185
- Wer: 0.3370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5137 | 1.0 | 500 | 1.6719 | 0.9580 |
| 0.8324 | 2.01 | 1000 | 0.5546 | 0.5341 |
| 0.4365 | 3.01 | 1500 | 0.4567 | 0.4635 |
| 0.3058 | 4.02 | 2000 | 0.4429 | 0.4454 |
| 0.2284 | 5.02 | 2500 | 0.4734 | 0.4186 |
| 0.1892 | 6.02 | 3000 | 0.4191 | 0.4030 |
| 0.1542 | 7.03 | 3500 | 0.4522 | 0.3985 |
| 0.1364 | 8.03 | 4000 | 0.4749 | 0.3922 |
| 0.1239 | 9.04 | 4500 | 0.4950 | 0.3977 |
| 0.1092 | 10.04 | 5000 | 0.4468 | 0.3779 |
| 0.0956 | 11.04 | 5500 | 0.4897 | 0.3789 |
| 0.0897 | 12.05 | 6000 | 0.4927 | 0.3718 |
| 0.0792 | 13.05 | 6500 | 0.5242 | 0.3699 |
| 0.0731 | 14.06 | 7000 | 0.5202 | 0.3772 |
| 0.0681 | 15.06 | 7500 | 0.5046 | 0.3637 |
| 0.062 | 16.06 | 8000 | 0.5336 | 0.3664 |
| 0.0556 | 17.07 | 8500 | 0.5017 | 0.3633 |
| 0.0556 | 18.07 | 9000 | 0.5466 | 0.3736 |
| 0.0461 | 19.08 | 9500 | 0.5489 | 0.3566 |
| 0.0439 | 20.08 | 10000 | 0.5399 | 0.3559 |
| 0.0397 | 21.08 | 10500 | 0.5154 | 0.3539 |
| 0.0346 | 22.09 | 11000 | 0.5170 | 0.3513 |
| 0.0338 | 23.09 | 11500 | 0.5236 | 0.3492 |
| 0.0342 | 24.1 | 12000 | 0.5288 | 0.3493 |
| 0.0282 | 25.1 | 12500 | 0.5147 | 0.3449 |
| 0.0251 | 26.1 | 13000 | 0.5092 | 0.3442 |
| 0.0268 | 27.11 | 13500 | 0.5093 | 0.3413 |
| 0.021 | 28.11 | 14000 | 0.5310 | 0.3399 |
| 0.022 | 29.12 | 14500 | 0.5185 | 0.3370 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
nepp1d0/prot_bert_classification_finetuned_no_finetune
|
nepp1d0
| 2022-05-10T12:27:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T22:29:53Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: prot_bert_classification_finetuned_no_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert_classification_finetuned_no_finetune
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6212
- Accuracy: 0.6473
- F1: 0.6623
- Precision: 0.6201
- Recall: 0.7107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6494 | 1.0 | 3332 | 0.6479 | 0.6439 | 0.6679 | 0.6116 | 0.7357 |
| 0.5357 | 2.0 | 6664 | 0.6440 | 0.6148 | 0.6459 | 0.5845 | 0.7218 |
| 0.4661 | 3.0 | 9996 | 0.6265 | 0.6283 | 0.6414 | 0.6047 | 0.6829 |
| 0.506 | 4.0 | 13328 | 0.6192 | 0.6439 | 0.6567 | 0.6187 | 0.6996 |
| 0.4204 | 5.0 | 16660 | 0.6122 | 0.6567 | 0.6752 | 0.6259 | 0.7330 |
| 0.6071 | 6.0 | 19992 | 0.6212 | 0.6473 | 0.6623 | 0.6201 | 0.7107 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
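A minimal scoring sketch (a loose assumption, not from the card: ProtBert-style tokenizers expect amino acids separated by spaces, and neither the exact input format nor the meaning of the output classes is documented here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "nepp1d0/prot_bert_classification_finetuned_no_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A toy spaced amino-acid sequence; replace with a real input in the expected format.
sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R L G L I E V Q"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.softmax(dim=-1))
```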
|
rairo/landing-v2
|
rairo
| 2022-05-10T12:22:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T12:21:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 256.23 +/- 14.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
darshanz/occupation-prediction
|
darshanz
| 2022-05-10T11:59:28Z | 35 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-05-08T04:35:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: darshanz/occupaion-prediction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# darshanz/occupation-prediction
This model is ViT base patch16, pretrained on the ImageNet dataset and then trained on our custom occupation-prediction dataset, which contains facial images of Indian people labeled by occupation. The model predicts a person's occupation from a facial image, categorizing inputs into 5 classes: Anchor, Athlete, Doctor, Professor, and Farmer. It achieves an accuracy of 84.43%.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 70, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.4}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.0840 | 0.6156 | 0.8813 | 0.6843 | 0.75 | 0.9700 | 0 |
| 0.4686 | 0.8406 | 0.9875 | 0.5345 | 0.8100 | 0.9867 | 1 |
| 0.2600 | 0.9312 | 0.9953 | 0.4805 | 0.8333 | 0.9800 | 2 |
| 0.1515 | 0.9609 | 0.9969 | 0.5071 | 0.8267 | 0.9733 | 3 |
| 0.0746 | 0.9875 | 1.0 | 0.4853 | 0.8500 | 0.9833 | 4 |
| 0.0468 | 0.9953 | 1.0 | 0.5006 | 0.8433 | 0.9733 | 5 |
| 0.0378 | 0.9953 | 1.0 | 0.4967 | 0.8433 | 0.9800 | 6 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Tokenizers 0.12.1
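A minimal inference sketch (not from the original card; it assumes the repository ships TensorFlow ViT weights together with the ViT preprocessing config):
```python
from PIL import Image
from transformers import AutoFeatureExtractor, TFViTForImageClassification

model_id = "darshanz/occupation-prediction"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)  # assumes a preprocessor config is present
model = TFViTForImageClassification.from_pretrained(model_id)

image = Image.open("face.jpg")  # placeholder path to a facial image
inputs = feature_extractor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.numpy().argmax())])
```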
|
huggingtweets/_avichalp_
|
huggingtweets
| 2022-05-10T11:56:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-10T11:55:53Z |
---
language: en
thumbnail: http://www.huggingtweets.com/_avichalp_/1652183801632/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1472922431396331520/eqT17_QF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">avi</div>
<div style="text-align: center; font-size: 14px;">@_avichalp_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from avi.
| Data | avi |
| --- | --- |
| Tweets downloaded | 2625 |
| Retweets | 259 |
| Short tweets | 596 |
| Tweets kept | 1770 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wg7ysai/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_avichalp_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ae6t1qq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ae6t1qq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_avichalp_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/broductmanager
|
huggingtweets
| 2022-05-10T11:36:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-10T11:35:53Z |
---
language: en
thumbnail: http://www.huggingtweets.com/broductmanager/1652182609331/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522425562895044608/H93gVhPH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rahul</div>
<div style="text-align: center; font-size: 14px;">@broductmanager</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rahul.
| Data | rahul |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 85 |
| Short tweets | 1164 |
| Tweets kept | 1995 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r967jne/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @broductmanager's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zx676ih) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zx676ih/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/broductmanager')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Santiagot1105/wav2vec2-large-xlsr-es-col-pro
|
Santiagot1105
| 2022-05-10T11:19:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-09T22:14:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-es-col-pro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-es-col-pro
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Wer: 0.0507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1032 | 7.4 | 400 | 0.0618 | 0.0656 |
| 0.0687 | 14.81 | 800 | 0.0670 | 0.0619 |
| 0.0402 | 22.22 | 1200 | 0.0693 | 0.0573 |
| 0.0252 | 29.62 | 1600 | 0.0636 | 0.0507 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
osanseviero/TEST2ppo-LunarLander-v3
|
osanseviero
| 2022-05-10T10:41:13Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-04T09:38:06Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -97.87 +/- 143.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
patrickvonplaten/wav2vec2-base-timit-demo-colab
|
patrickvonplaten
| 2022-05-10T09:38:48Z | 449 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 0.3392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1134 | 4.0 | 500 | 0.4250 | 0.3626 |
| 0.1035 | 8.0 | 1000 | 0.4980 | 0.3650 |
| 0.0801 | 12.0 | 1500 | 0.5563 | 0.3632 |
| 0.0592 | 16.0 | 2000 | 0.6222 | 0.3607 |
| 0.0563 | 20.0 | 2500 | 0.4763 | 0.3457 |
| 0.0611 | 24.0 | 3000 | 0.4938 | 0.3489 |
| 0.0475 | 28.0 | 3500 | 0.4888 | 0.3392 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
usmanazhar/finetuning-sentiment-model-3000-samples
|
usmanazhar
| 2022-05-10T09:27:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T04:50:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8766233766233766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
- Accuracy: 0.8733
- F1: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
etsymba/ppo-LunarLander-v2
|
etsymba
| 2022-05-10T09:26:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T09:23:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 208.93 +/- 53.16
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
melodisease/ppo-LunarLander-v2
|
melodisease
| 2022-05-10T08:57:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T08:56:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 243.43 +/- 22.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
mrm8488/electricidad-base-finetuned-parmex
|
mrm8488
| 2022-05-10T08:18:19Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T07:56:42Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: electricidad-base-finetuned-parmex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-base-finetuned-parmex
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0372
- F1: 0.9764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.309269976237555e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 208 | 0.0377 | 0.9801 |
| No log | 2.0 | 416 | 0.0372 | 0.9764 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tomhosking/deberta-v3-base-debiased-nli
|
tomhosking
| 2022-05-10T08:15:40Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T07:35:49Z |
---
license: apache-2.0
widget:
- text: "[CLS] Rover is a dog. [SEP] Rover is a cat. [SEP]"
---
`deberta-v3-base`, fine-tuned on the debiased NLI dataset from "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets", Wu et al., 2022.
Tuned using the code at https://github.com/jimmycode/gen-debiased-nli.
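A minimal premise/hypothesis scoring sketch (not part of the original card; the label names are read from the model config at runtime rather than assumed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tomhosking/deberta-v3-base-debiased-nli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise, hypothesis = "Rover is a dog.", "Rover is a cat."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```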
|
jabot/PPPO_LunarLanderV2_1000000Steps
|
jabot
| 2022-05-10T07:54:22Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T07:53:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 261.06 +/- 28.61
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Theimisa/distilbert-base-uncased-aisera_texts-v3
|
Theimisa
| 2022-05-10T07:49:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-09T11:41:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-aisera_texts-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-aisera_texts-v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0183 | 1.0 | 3875 | 1.8913 |
| 1.9018 | 2.0 | 7750 | 1.8106 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Slientea/TEST2ppo-LunarLander-v2
|
Slientea
| 2022-05-10T07:13:07Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T07:12:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 202.32 +/- 21.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
describeai/gemini-small
|
describeai
| 2022-05-10T06:00:56Z | 247 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Explain code",
"Code Summarization",
"Summarization",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- Explain code
- Code Summarization
- Summarization
license: mit
---
# Gemini
For in-depth understanding of our model and methods, please see our blog [here](https://www.describe-ai.com/gemini)
## Model description
Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing and explaining short to medium code snippets in:
- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go
And outputs a description in English.
## Intended uses & limitations
Gemini, without any additional fine-tuning, is capable of explaining code in a sentence or two and typically performs best on Python and Javascript. We recommend using Gemini for simple code explanation, documentation, or producing more synthetic data to improve its explanations.
### How to use
You can use this model directly with a pipeline for Text2Text generation, as shown below:
```python
from transformers import pipeline, set_seed
summarizer = pipeline('text2text-generation', model='describeai/gemini-small')
code = "print('hello world!')"
response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])
```
Which should yield something along the lines of:
```
Summarized code: The following code is greeting the world.
```
### Model sizes
- Gemini: 770 Million Parameters
- Gemini-Small (this repo): 220 Million Parameters
### Limitations
Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect that with more training data this could be mitigated, producing better results.
### About Us
At Describe.ai, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While that is a long path, we plan to contribute the findings from our API to the Open Source community.
|
ncduy/ppo-LunarLander-v2
|
ncduy
| 2022-05-10T05:22:19Z | 1 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 290.76 +/- 18.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# {name_of_your_repo}
This is a pre-trained model of a {algo} agent playing {environment} using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="{repo_id}", filename="{filename}.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('{environment}')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = eval_env.step(action)
    eval_env.render()
    if done:
        obs = eval_env.reset()
eval_env.close()
```
### Evaluation Results
Mean_reward: {your_evaluation_results}
### Demo
<video src="https://huggingface.co/ncduy/ppo-LunarLander-v2/resolve/main/output.mp4" controls autoplay loop></video>
|
madatnlp/gamza-bart-for-kormath128
|
madatnlp
| 2022-05-10T05:16:17Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T05:01:54Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/gamza-bart-for-kormath128
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/gamza-bart-for-kormath128
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1429
- Validation Loss: 0.3575
- Epoch: 42
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.9513 | 3.2241 | 0 |
| 2.6808 | 1.8567 | 1 |
| 1.6770 | 1.2966 | 2 |
| 1.2253 | 1.0402 | 3 |
| 1.0279 | 0.9159 | 4 |
| 0.9241 | 0.8158 | 5 |
| 0.8570 | 0.8047 | 6 |
| 0.8130 | 0.7684 | 7 |
| 0.7771 | 0.7817 | 8 |
| 0.7522 | 0.7653 | 9 |
| 0.7318 | 0.6813 | 10 |
| 0.7111 | 0.6535 | 11 |
| 0.6916 | 0.6719 | 12 |
| 0.6901 | 0.7191 | 13 |
| 0.6551 | 0.6330 | 14 |
| 0.6495 | 0.6242 | 15 |
| 0.6258 | 0.6048 | 16 |
| 0.6184 | 0.6590 | 17 |
| 0.6055 | 0.6622 | 18 |
| 0.5946 | 0.6377 | 19 |
| 0.5807 | 0.5994 | 20 |
| 0.5781 | 0.5797 | 21 |
| 0.5644 | 0.6154 | 22 |
| 0.5466 | 0.5777 | 23 |
| 0.5417 | 0.6324 | 24 |
| 0.5204 | 0.5763 | 25 |
| 0.5081 | 0.5751 | 26 |
| 0.4923 | 0.5908 | 27 |
| 0.4616 | 0.5433 | 28 |
| 0.4238 | 0.4823 | 29 |
| 0.3765 | 0.4474 | 30 |
| 0.3447 | 0.4306 | 31 |
| 0.3156 | 0.3817 | 32 |
| 0.2832 | 0.3824 | 33 |
| 0.2632 | 0.3204 | 34 |
| 0.2365 | 0.3539 | 35 |
| 0.2179 | 0.3162 | 36 |
| 0.2024 | 0.3385 | 37 |
| 0.1860 | 0.3367 | 38 |
| 0.1801 | 0.3019 | 39 |
| 0.1629 | 0.3045 | 40 |
| 0.1533 | 0.2567 | 41 |
| 0.1429 | 0.3575 | 42 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
csebuetnlp/banglishbert
|
csebuetnlp
| 2022-05-10T05:13:47Z | 730 | 2 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"bn",
"en",
"arxiv:2101.00204",
"endpoints_compatible",
"region:us"
] | null | 2022-05-04T09:47:49Z |
---
language:
- bn
- en
licenses:
- cc-by-nc-sa-4.0
---
# BanglishBERT
This repository contains the pretrained discriminator checkpoint of the model **BanglishBERT**. This is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective on large amounts of Bengali and English corpora. BanglishBERT achieves state-of-the-art **zero-shot cross-lingual transfer** results in many of the NLP tasks in Bengali.
For finetuning on different downstream tasks such as `Sentiment classification`, `Named Entity Recognition`, `Natural Language Inference` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/banglabert).
**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing to get the best results. A basic example is given below:
## Using this model as a discriminator in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoModelForPreTraining, AutoTokenizer
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
import torch
model = AutoModelForPreTraining.from_pretrained("csebuetnlp/banglishbert")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglishbert")
original_sentence = "আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = "আমি হতাশ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = normalize(fake_sentence) # this normalization step is required before tokenizing the text
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = model(fake_inputs).logits
predictions = torch.round((torch.sign(discriminator_outputs) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
print("\n" + "-" * 50)
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()[1:-1]]
print("\n" + "-" * 50)
```
## Benchmarks
* Zero-shot cross-lingual transfer-learning
| Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|
|[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 27.05 | 62.22 | 39.27 | 59.01/64.18 | 50.35 |
|[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 42.03 | 72.18 | 45.37 | 55.03/61.83 | 55.29 |
|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 49.49 | 78.13 | 56.48 | 71.13/77.70 | 66.59 |
|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 48.39 | 75.26 | 55.56 | 72.87/78.63 | 66.14 |
* Supervised fine-tuning
| Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|
|[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 67.59 | 75.13 | 68.97 | 67.12/72.64 | 70.29 |
|[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 69.54 | 78.46 | 73.32 | 68.09/74.27 | 72.82 |
|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 70.97 | 82.40 | 78.39 | 73.15/79.06 | 76.79 |
|[sahajBERT](https://huggingface.co/neuropark/sahajBERT) | 18M | 71.12 | 76.92 | 70.94 | 65.48/70.69 | 71.03 |
|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 70.61 | 80.95 | 76.28 | 72.43/78.40 | 75.73 |
|[BanglaBERT](https://huggingface.co/csebuetnlp/banglabert) | 110M | 72.89 | 82.80 | 77.78 | 72.63/79.34 | **77.09** |
The benchmarking datasets are as follows:
* **SC:** **[Sentiment Classification](https://aclanthology.org/2021.findings-emnlp.278)**
* **NER:** **[Named Entity Recognition](https://multiconer.github.io/competition)**
* **NLI:** **[Natural Language Inference](https://github.com/csebuetnlp/banglabert/#datasets)**
* **QA:** **[Question Answering](https://github.com/csebuetnlp/banglabert/#datasets)**
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{bhattacharjee-etal-2022-banglabert,
title = {BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla},
author = "Bhattacharjee, Abhik and
Hasan, Tahmid and
Mubasshir, Kazi and
Islam, Md. Saiful and
Uddin, Wasi Ahmad and
Iqbal, Anindya and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the North American Chapter of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = {2022},
url = {https://arxiv.org/abs/2101.00204},
eprinttype = {arXiv},
eprint = {2101.00204}
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
kornosk/bert-political-election2020-twitter-mlm
|
kornosk
| 2022-05-10T04:45:45Z | 88 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"twitter",
"masked-token-prediction",
"election2020",
"politics",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- twitter
- masked-token-prediction
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Political Election 2020
Pre-trained weights for [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
We use the initialized weights from BERT-base (uncased) or `bert-base-uncased`.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.
# Training Objective
This model is initialized with BERT-base and trained with the standard MLM objective.
# Usage
This pre-trained language model **can be fine-tuned on any downstream task (e.g. classification)**.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
import torch
# Choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Select model path here
pretrained_LM_path = "kornosk/bert-political-election2020-twitter-mlm"
# Load model
tokenizer = BertTokenizer.from_pretrained(pretrained_LM_path)
model = BertForMaskedLM.from_pretrained(pretrained_LM_path)
# Fill mask
example = "Trump is the [MASK] of USA"
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
# Use the following line instead if the above one does not work.
# Hugging Face has been updated; newer versions accept the model name as a string instead.
fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer)
outputs = fill_mask(example)
print(outputs)
# See embeddings
inputs = tokenizer(example, return_tensors="pt")
outputs = model(**inputs)
print(outputs)
# OR you can use this model to train on your downstream task!
# Please consider citing our paper if you feel this is useful :)
```
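Since the card notes that this checkpoint can be fine-tuned on downstream tasks, here is a minimal sketch of loading it with a sequence-classification head. The number of labels and the example text are assumptions for illustration; the actual stance-detection setup (KE-MLM) is described in the paper and official repository.
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

pretrained_LM_path = "kornosk/bert-political-election2020-twitter-mlm"

# Attach a randomly initialized classification head (3 stance labels is an assumption)
tokenizer = BertTokenizer.from_pretrained(pretrained_LM_path)
model = BertForSequenceClassification.from_pretrained(pretrained_LM_path, num_labels=3)

# Forward pass on a single example; in practice you would fine-tune on labeled stance data first
inputs = tokenizer("Trump will win the election", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```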
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
```
|
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4
|
husnu
| 2022-05-10T04:41:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-09T13:54:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3201
- Wer: 0.3295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.9268 | 0.51 | 400 | 1.3204 | 0.9175 |
| 0.7491 | 1.02 | 800 | 0.5880 | 0.6388 |
| 0.4911 | 1.53 | 1200 | 0.4680 | 0.5613 |
| 0.4265 | 2.04 | 1600 | 0.4213 | 0.5059 |
| 0.3473 | 2.55 | 2000 | 0.4199 | 0.4955 |
| 0.3291 | 3.07 | 2400 | 0.4323 | 0.5061 |
| 0.2819 | 3.58 | 2800 | 0.4026 | 0.4490 |
| 0.2628 | 4.09 | 3200 | 0.3831 | 0.4446 |
| 0.2371 | 4.6 | 3600 | 0.3622 | 0.4234 |
| 0.2274 | 5.11 | 4000 | 0.3473 | 0.4012 |
| 0.2051 | 5.62 | 4400 | 0.3471 | 0.3998 |
| 0.1985 | 6.13 | 4800 | 0.3759 | 0.4088 |
| 0.1767 | 6.64 | 5200 | 0.3620 | 0.4012 |
| 0.1707 | 7.15 | 5600 | 0.3415 | 0.3700 |
| 0.1559 | 7.66 | 6000 | 0.3317 | 0.3661 |
| 0.147 | 8.17 | 6400 | 0.3265 | 0.3618 |
| 0.1339 | 8.68 | 6800 | 0.3293 | 0.3586 |
| 0.126 | 9.2 | 7200 | 0.3386 | 0.3458 |
| 0.1149 | 9.71 | 7600 | 0.3305 | 0.3397 |
| 0.1051 | 10.22 | 8000 | 0.3235 | 0.3354 |
| 0.1005 | 10.73 | 8400 | 0.3201 | 0.3295 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|
Sounak/distilbert-finetuned
|
Sounak
| 2022-05-10T04:05:02Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-10T04:00:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Sounak/distilbert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sounak/distilbert-finetuned
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0422
- Validation Loss: 1.7343
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 468, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9989 | 1.6524 | 0 |
| 1.3489 | 1.6702 | 1 |
| 1.0422 | 1.7343 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Eugene-Bond/ppo-LunarLander-v2
|
Eugene-Bond
| 2022-05-10T03:41:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-08T14:31:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 282.88 +/- 14.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```
from typing import Callable

import gym
from stable_baselines3 import PPO

def linear_schedule(initial_value: float) -> Callable[[float], float]:
    def func(progress_remaining: float) -> float:
        return progress_remaining * initial_value
    return func

env = gym.make("LunarLander-v2")
model = PPO(policy="MlpPolicy", env=env, verbose=1, n_epochs=10, learning_rate=linear_schedule(0.005), n_steps=1500)
```
|
ckiplab/bert-base-chinese-ws
|
ckiplab
| 2022-05-10T03:28:12Z | 202,812 | 15 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-ws')
```
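For a quick word-segmentation demo, the model name and the fast tokenizer can be wrapped in a `token-classification` pipeline. This is only a rough sketch and not part of the official instructions; the supported driver is the ckip-transformers package linked below, and aggregating per-character tags into words is left to the caller.
```
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
ws_pipeline = pipeline('token-classification', model='ckiplab/bert-base-chinese-ws', tokenizer=tokenizer)

# Each character is tagged (begin/inside of a word)
print(ws_pipeline('傅達仁今將執行安樂死'))
```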
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/albert-tiny-chinese-ws
|
ckiplab
| 2022-05-10T03:28:12Z | 86,338 | 6 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/gpt2-base-chinese
|
ckiplab
| 2022-05-10T03:28:12Z | 73,107 | 30 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- gpt2
- zh
license: gpl-3.0
---
# CKIP GPT2 Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/gpt2-base-chinese')
```
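To generate text rather than only extract hidden states, the checkpoint can also be loaded with a language-modeling head. This is a rough sketch, not from the official instructions; the prompt and generation settings are arbitrary.
```
from transformers import BertTokenizerFast, GPT2LMHeadModel

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = GPT2LMHeadModel.from_pretrained('ckiplab/gpt2-base-chinese')

# Greedy generation from a short traditional-Chinese prompt
input_ids = tokenizer('今天天氣真好,', return_tensors='pt').input_ids
output_ids = model.generate(input_ids, max_length=30, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```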
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/bert-tiny-chinese-ws
|
ckiplab
| 2022-05-10T03:28:12Z | 1,641 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-10T02:54:32Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/albert-tiny-chinese-ner
|
ckiplab
| 2022-05-10T03:28:10Z | 122 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-ner')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/albert-base-chinese-pos
|
ckiplab
| 2022-05-10T03:28:09Z | 1,144 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-pos')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/albert-tiny-chinese
|
ckiplab
| 2022-05-10T03:28:09Z | 1,193 | 10 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/albert-base-chinese-ws
|
ckiplab
| 2022-05-10T03:28:09Z | 1,733 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ws')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
ckiplab/albert-base-chinese
|
ckiplab
| 2022-05-10T03:28:08Z | 1,117 | 12 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
dfsj/xlm-roberta-base-finetuned-panx-de
|
dfsj
| 2022-05-10T03:20:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-10T02:25:57Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8674931756141947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1326
- F1: 0.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2654 | 1.0 | 525 | 0.1745 | 0.8133 |
| 0.1317 | 2.0 | 1050 | 0.1428 | 0.8427 |
| 0.0823 | 3.0 | 1575 | 0.1326 | 0.8675 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
|
truckli/distilbert-base-uncased-finetuned-cola
|
truckli
| 2022-05-10T03:08:21Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T02:02:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: truckli/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# truckli/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1784
- Validation Loss: 0.6462
- Train Matthews Correlation: 0.4750
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5225 | 0.4622 | 0.4667 | 0 |
| 0.3210 | 0.4788 | 0.4909 | 1 |
| 0.1784 | 0.6462 | 0.4750 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
salil-malhotra/test01-ppo-LunarLander-v2
|
salil-malhotra
| 2022-05-10T02:42:21Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T21:07:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 175.56 +/- 103.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
murdockthedude/wav2vec2-base-timit-demo-colab
|
murdockthedude
| 2022-05-10T02:31:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-10T00:02:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4627
- Wer: 0.3518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4716 | 4.0 | 500 | 1.3023 | 0.9254 |
| 0.5958 | 8.0 | 1000 | 0.4582 | 0.4399 |
| 0.2223 | 12.0 | 1500 | 0.4477 | 0.3886 |
| 0.1373 | 16.0 | 2000 | 0.4791 | 0.3630 |
| 0.101 | 20.0 | 2500 | 0.4676 | 0.3561 |
| 0.0724 | 24.0 | 3000 | 0.4539 | 0.3510 |
| 0.0513 | 28.0 | 3500 | 0.4627 | 0.3518 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
dbarbedillo/ppo-LunarLander-v2-3
|
dbarbedillo
| 2022-05-10T01:26:02Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T01:08:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 294.85 +/- 15.48
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
jhoonk/distilbert-base-uncased-finetuned-squad
|
jhoonk
| 2022-05-10T00:07:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-02T11:03:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2107 | 1.0 | 5533 | 1.1478 |
| 0.949 | 2.0 | 11066 | 1.1191 |
| 0.7396 | 3.0 | 16599 | 1.1622 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pinecone/distiluse-podcast-nq
|
pinecone
| 2022-05-09T22:47:45Z | 113 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-04-06T15:57:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# DistilUSE Podcast Natural Questions
This is a [sentence-transformers](https://www.SBERT.net) model built for asymmetric semantic search of Podcast episodes. It replicates the fine-tuning process of Spotify's podcast search model, as [described here](https://www.pinecone.io/learn/spotify-podcast-search/).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["podcast about climate change", "how to make money on the internet"]
model = SentenceTransformer('pinecone/distiluse-podcast-nq')
embeddings = model.encode(sentences)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3748 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.RerankingEvaluator.RerankingEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 374,
"weight_decay": 0.01
}
```
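Putting those pieces together, a rough replication of this fine-tuning setup is sketched below. The base checkpoint and the (query, episode-text) pairs are placeholders; the real data preparation is described in the linked article, and a real run needs far more pairs than the tiny list shown here.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer('distiluse-base-multilingual-cased-v2')  # assumed base model

# Placeholder (query, episode text) pairs; replace with the full training set
train_examples = [
    InputExample(texts=["podcast about climate change", "In this episode we discuss the climate crisis ..."]),
    InputExample(texts=["how to make money on the internet", "Today's guest built an online business ..."]),
]

# batch_size=64 as listed above; NoDuplicatesDataLoader needs a dataset much larger than the placeholder list
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=374,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```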
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
James Briggs, [How Spotify Uses Semantic Search for Podcasts](https://www.pinecone.io/learn/spotify-podcast-search/), Pinecone
|
leumastai/LunarLander-TestModel
|
leumastai
| 2022-05-09T21:57:03Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T21:56:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 124.09 +/- 113.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
IvanTi/ppo-lunarlander-v0
|
IvanTi
| 2022-05-09T20:39:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T19:38:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 263.54 +/- 22.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
KenP/marian-finetuned-kde4-en-to-fr
|
KenP
| 2022-05-09T20:36:25Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-09T18:11:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KenP/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KenP/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6855
- Validation Loss: 0.8088
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0599 | 0.8835 | 0 |
| 0.7975 | 0.8254 | 1 |
| 0.6855 | 0.8088 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
suppabob/TEST2ppo-LunarLander-v2
|
suppabob
| 2022-05-09T18:55:43Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T18:55:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 218.36 +/- 65.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
lm1991/PPO
|
lm1991
| 2022-05-09T18:43:38Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T18:35:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 232.96 +/- 23.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
ironbar/ppo-lunarlander-v2-local-train
|
ironbar
| 2022-05-09T17:48:00Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T17:46:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 301.16 +/- 11.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Joshwabail/lunar_lander_test
|
Joshwabail
| 2022-05-09T16:57:52Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T16:29:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -177.16 +/- 72.05
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huxxx657/roberta-base-finetuned-squad-2
|
huxxx657
| 2022-05-09T15:58:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-09T14:48:30Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.9519 | 1.0 | 5536 | 5.9506 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
angelinux/PPO-LunarLander-v2
|
angelinux
| 2022-05-09T15:35:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T15:34:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 269.81 +/- 34.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
princeton-nlp/CoFi-MRPC-s95
|
princeton-nlp
| 2022-05-09T15:24:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T15:19:17Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset MRPC. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
princeton-nlp/CoFi-MRPC-s60
|
princeton-nlp
| 2022-05-09T15:24:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T15:19:52Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset MRPC. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
princeton-nlp/CoFi-CoLA-s95
|
princeton-nlp
| 2022-05-09T15:24:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T15:20:55Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset CoLA. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
princeton-nlp/CoFi-RTE-s60
|
princeton-nlp
| 2022-05-09T15:23:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T15:10:20Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset RTE. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
princeton-nlp/CoFi-RTE-s96
|
princeton-nlp
| 2022-05-09T15:21:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T15:11:06Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 96% sparsity on dataset RTE. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
ansegura/ppo-LunarLander-v2-test-1
|
ansegura
| 2022-05-09T14:54:56Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T14:54:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.06 +/- 17.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
angelinux/PPO-LunarLander-v1
|
angelinux
| 2022-05-09T14:40:42Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-05T15:03:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 257.69 +/- 14.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
darttur/ppo-lunarlander-l
|
darttur
| 2022-05-09T13:34:17Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T12:27:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 284.71 +/- 16.95
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
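A minimal sketch using the `huggingface_sb3` helper, not the author's code; the checkpoint filename passed to `load_from_hub` is an assumption, so check the repository's file listing for the actual `.zip` name.
```python
# Minimal sketch: fetch the checkpoint from the Hub and run one greedy episode.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption -- use the actual .zip name from the repo files.
checkpoint = load_from_hub(repo_id="darttur/ppo-lunarlander-l", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```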
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e10
|
theojolliffe
| 2022-05-09T12:37:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-09T10:44:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e10
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8410
- Rouge1: 56.5123
- Rouge2: 41.1641
- Rougel: 43.4495
- Rougelsum: 54.544
- Gen Len: 141.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
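A minimal usage sketch (assumed, since the card does not document inference): summarize a passage with the `summarization` pipeline. The example text and generation settings are illustrative only.
```python
# Minimal sketch: summarization with this fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e10")

text = "Replace this with the report or article text you want to summarise."
summary = summarizer(text, max_length=142, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```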
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.254 | 1.0 | 795 | 0.9244 | 52.4478 | 32.5958 | 34.8756 | 49.8059 | 142.0 |
| 0.6985 | 2.0 | 1590 | 0.8156 | 52.4786 | 33.2296 | 35.5063 | 49.737 | 141.7963 |
| 0.5252 | 3.0 | 2385 | 0.7821 | 52.0494 | 32.953 | 36.5502 | 49.7292 | 142.0 |
| 0.3389 | 4.0 | 3180 | 0.7422 | 53.5408 | 36.2206 | 39.8389 | 51.6693 | 142.0 |
| 0.26 | 5.0 | 3975 | 0.7670 | 54.4279 | 36.5972 | 40.255 | 52.0877 | 142.0 |
| 0.1678 | 6.0 | 4770 | 0.8106 | 54.6811 | 37.8329 | 40.8512 | 52.3482 | 141.963 |
| 0.1243 | 7.0 | 5565 | 0.7926 | 54.5081 | 37.9596 | 41.912 | 52.5097 | 142.0 |
| 0.0967 | 8.0 | 6360 | 0.8079 | 56.0795 | 40.0954 | 43.7055 | 54.2041 | 142.0 |
| 0.0709 | 9.0 | 7155 | 0.8390 | 55.5257 | 38.5546 | 42.1562 | 53.5524 | 141.963 |
| 0.0691 | 10.0 | 7950 | 0.8410 | 56.5123 | 41.1641 | 43.4495 | 54.544 | 141.6667 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
karimdaou/ppo-LunarLander-v2
|
karimdaou
| 2022-05-09T11:07:14Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T11:06:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 211.84 +/- 25.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
NTUYG/ComFormer
|
NTUYG
| 2022-05-09T10:55:14Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:DeepCom",
"arxiv:2107.03644",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-09T04:20:31Z |
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- DeepCom
metrics:
- bleu
---
# How To Use
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("NTUYG/ComFormer")
tokenizer = BartTokenizer.from_pretrained("NTUYG/ComFormer")
code = '''
public static void copyFile( File in, File out )
throws IOException
{
FileChannel inChannel = new FileInputStream( in ).getChannel();
FileChannel outChannel = new FileOutputStream( out ).getChannel();
try
{
// inChannel.transferTo(0, inChannel.size(), outChannel); // original -- apparently has trouble copying large files on Windows
// magic number for Windows, 64Mb - 32Kb)
int maxCount = (64 * 1024 * 1024) - (32 * 1024);
long size = inChannel.size();
long position = 0;
while ( position < size )
{
position += inChannel.transferTo( position, maxCount, outChannel );
}
}
finally
{
if ( inChannel != null )
{
inChannel.close();
}
if ( outChannel != null )
{
outChannel.close();
}
}
}
'''
code_seq, sbt = utils.transformer(code)  # the `utils` helper can be found in https://github.com/NTDXYG/ComFormer
input_text = code_seq + sbt
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=256, truncation=True)
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=30,
min_length=2,
num_beams=5,
)
comment = tokenizer.decode(summary_text_ids[0], skip_special_tokens=True)
print(comment)
```
# BibTeX entry and citation info
```
@misc{yang2021comformer,
title={ComFormer: Code Comment Generation via Transformer and Fusion Method-based Hybrid Code Representation},
author={Guang Yang and Xiang Chen and Jinxin Cao and Shuyuan Xu and Zhanqi Cui and Chi Yu and Ke Liu},
year={2021},
eprint={2107.03644},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
|
745H1N/LunarLander-v2-PPO
|
745H1N
| 2022-05-09T10:53:28Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T08:25:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 271.03 +/- 12.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
madatnlp/ke-t5-scratch
|
madatnlp
| 2022-05-09T10:52:51Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T02:59:40Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/ke-t5-scratch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/ke-t5-scratch
This model is a fine-tuned version of [madatnlp/ke-t5-math-py](https://huggingface.co/madatnlp/ke-t5-math-py) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4760
- Validation Loss: 0.7360
- Epoch: 36
## Model description
More information needed
## Intended uses & limitations
More information needed
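A minimal usage sketch (assumed): the checkpoint is a TensorFlow T5, so it can be loaded with the TF seq2seq classes; the example input string is a placeholder.
```python
# Minimal sketch: load the TensorFlow checkpoint and generate with it.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("madatnlp/ke-t5-scratch")
model = TFAutoModelForSeq2SeqLM.from_pretrained("madatnlp/ke-t5-scratch")

inputs = tokenizer("Example problem text goes here.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```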
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2751 | 2.1074 | 0 |
| 2.2716 | 1.7945 | 1 |
| 1.8889 | 1.5726 | 2 |
| 1.6760 | 1.3722 | 3 |
| 1.5021 | 1.3280 | 4 |
| 1.4369 | 1.2523 | 5 |
| 1.3352 | 1.0619 | 6 |
| 1.2749 | 1.1156 | 7 |
| 1.2170 | 1.0452 | 8 |
| 1.1713 | 1.0596 | 9 |
| 1.1410 | 1.0080 | 10 |
| 1.0884 | 1.0213 | 11 |
| 1.0508 | 0.9223 | 12 |
| 0.9933 | 0.9353 | 13 |
| 0.9871 | 0.8749 | 14 |
| 0.9251 | 0.9173 | 15 |
| 0.9282 | 0.8620 | 16 |
| 0.8849 | 0.8093 | 17 |
| 0.8613 | 0.7823 | 18 |
| 0.8322 | 0.8016 | 19 |
| 0.8070 | 0.8844 | 20 |
| 0.7737 | 0.7635 | 21 |
| 0.7465 | 0.8440 | 22 |
| 0.7178 | 0.7958 | 23 |
| 0.7036 | 0.7739 | 24 |
| 0.6813 | 0.7347 | 25 |
| 0.6597 | 0.7545 | 26 |
| 0.6427 | 0.7394 | 27 |
| 0.6154 | 0.7212 | 28 |
| 0.5892 | 0.7653 | 29 |
| 0.5696 | 0.7073 | 30 |
| 0.5644 | 0.6977 | 31 |
| 0.5307 | 0.6977 | 32 |
| 0.5159 | 0.7736 | 33 |
| 0.5131 | 0.8138 | 34 |
| 0.4812 | 0.7623 | 35 |
| 0.4760 | 0.7360 | 36 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
deepgai/tweet_eval-sentiment-finetuned
|
deepgai
| 2022-05-09T10:46:47Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-08T19:20:19Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: tweet_eval-sentiment-finetuned
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: tweeteval
type: tweeteval
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7099
- name: f1
type: f1
value: 0.7097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_eval-sentiment-finetuned
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the Tweet_Eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Accuracy: 0.744
- F1: 0.7437
## Model description
More information needed
## Intended uses & limitations
More information needed
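A minimal usage sketch (assumed): score a tweet with the `text-classification` pipeline; the example tweet is a placeholder.
```python
# Minimal sketch: sentiment classification with this fine-tuned DeBERTa-v3 checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="deepgai/tweet_eval-sentiment-finetuned")
print(classifier("I love how this library keeps getting better!"))
```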
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7491 | 1.0 | 357 | 0.6089 | 0.7345 | 0.7314 |
| 0.5516 | 2.0 | 714 | 0.5958 | 0.751 | 0.7516 |
| 0.4618 | 3.0 | 1071 | 0.6131 | 0.748 | 0.7487 |
| 0.4066 | 4.0 | 1428 | 0.6532 | 0.744 | 0.7437 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jhoonk/bert-base-uncased-finetuned-swag
|
jhoonk
| 2022-05-09T10:41:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-05-02T10:57:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0337
- Accuracy: 0.7888
## Model description
More information needed
## Intended uses & limitations
More information needed
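A minimal usage sketch (assumed, not from the author): score candidate endings for a context the way SWAG frames the task; the example context and endings are illustrative.
```python
# Minimal sketch: pick the most plausible ending with a multiple-choice head.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "jhoonk/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "A man is sitting at a piano."
endings = ["He starts to play a song.", "He eats a sandwich."]

# Encode each (context, ending) pair, then add a batch dimension of 1.
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted ending:", endings[logits.argmax(dim=-1).item()])
```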
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7451 | 1.0 | 4597 | 0.5944 | 0.7696 |
| 0.3709 | 2.0 | 9194 | 0.6454 | 0.7803 |
| 0.1444 | 3.0 | 13791 | 1.0337 | 0.7888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e16
|
theojolliffe
| 2022-05-09T10:37:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-09T08:51:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e16
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8502
- Rouge1: 57.1726
- Rouge2: 42.87
- Rougel: 44.7485
- Rougelsum: 55.6955
- Gen Len: 141.5926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.4961 | 1.0 | 795 | 1.0907 | 53.2509 | 33.4232 | 34.4499 | 50.987 | 142.0 |
| 0.8874 | 2.0 | 1590 | 0.9408 | 52.9708 | 34.499 | 36.537 | 50.3924 | 140.4074 |
| 0.6994 | 3.0 | 2385 | 0.8731 | 53.4488 | 34.2476 | 37.4579 | 51.1979 | 142.0 |
| 0.4883 | 4.0 | 3180 | 0.8521 | 53.5463 | 34.7519 | 37.8143 | 51.106 | 142.0 |
| 0.3923 | 5.0 | 3975 | 0.8227 | 53.3556 | 35.0361 | 37.1719 | 50.9195 | 141.2222 |
| 0.2727 | 6.0 | 4770 | 0.8323 | 54.8422 | 37.333 | 39.6388 | 52.2975 | 141.8148 |
| 0.2158 | 7.0 | 5565 | 0.8252 | 54.0343 | 36.0109 | 38.34 | 51.6282 | 142.0 |
| 0.1734 | 8.0 | 6360 | 0.7985 | 54.9597 | 38.283 | 41.0033 | 52.9537 | 142.0 |
| 0.1366 | 9.0 | 7155 | 0.8112 | 56.315 | 40.3948 | 42.2944 | 54.3719 | 142.0 |
| 0.1275 | 10.0 | 7950 | 0.8238 | 55.8688 | 39.4747 | 43.0286 | 53.9269 | 142.0 |
| 0.0978 | 11.0 | 8745 | 0.8345 | 54.9934 | 40.0148 | 42.2721 | 53.324 | 142.0 |
| 0.0738 | 12.0 | 9540 | 0.8322 | 56.3862 | 41.4322 | 44.1406 | 54.4768 | 142.0 |
| 0.0688 | 13.0 | 10335 | 0.8384 | 55.9261 | 40.7102 | 43.5825 | 54.2394 | 142.0 |
| 0.0587 | 14.0 | 11130 | 0.8435 | 56.8475 | 41.7188 | 44.0671 | 54.9813 | 142.0 |
| 0.0529 | 15.0 | 11925 | 0.8476 | 57.4678 | 42.3804 | 45.4776 | 55.746 | 142.0 |
| 0.0469 | 16.0 | 12720 | 0.8502 | 57.1726 | 42.87 | 44.7485 | 55.6955 | 141.5926 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
RajSang/pegasus-sports-titles
|
RajSang
| 2022-05-09T09:26:14Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
widget:
- text: "Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home
his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response.
First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent
cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net.
The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener.
Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November.
Gerrard is not at Villa to learn how to avoid relegation.
His demands remain as high as they were as a player and Coutinho's arrival is an example of that.
Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game.
The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees.
Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away.
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution.
However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him."
language: en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-sports-titles
This model is a Pegasus model fine-tuned on **sports news articles scraped from the internet (for educational purposes only)**. It can generate titles for sports articles. Try it out using the inference API.
## Model description
A Pegasus model tuned for generating scientific titles has been further fine-tuned to generate titles for sports articles. During training, articles on **Tennis, Football (Soccer), Cricket, Athletics and Rugby** were used. I experimented with training the tokenizer from scratch, but it did not give good results compared to the pre-trained tokenizer.
## Usage
```python
from transformers import pipeline
#Feel free to play around with the generation parameters.
#Reduce the beam width for faster inference
#Note that the maximum length for the generated titles is 64
gen_kwargs = {"length_penalty": 0.6, "num_beams":4, "num_return_sequences": 4,"num_beam_groups":4,"diversity_penalty":2.0}
pipe = pipeline("summarization", model="RajSang/pegasus-sports-titles")
#Change the article according to your wish
article="""
Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home
his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response.
First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent
cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net.
The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener.
Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November.
Gerrard is not at Villa to learn how to avoid relegation.
His demands remain as high as they were as a player and Coutinho's arrival is an example of that.
Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game.
The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees.
Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away.
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution.
However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him.
"""
result=pipe(article, **gen_kwargs)[0]["summary_text"]
print(result)
''' Output
Title 1 :
Coutinho's arrival sparks Villa comeback
Title 2 :
Philippe Coutinho marked his debut for Aston Villa with a goal and an assist as Steven Gerrard's side came from two goals down to draw with Manchester United.
Title 3 :
Steven Gerrard's first game in charge of Aston Villa ended in a dramatic draw against Manchester United - but it was the arrival of Philippe Coutinho that marked the night.
Title 4 :
Liverpool loanee Philippe Coutinho marked his first appearance for Aston Villa with two goals as Steven Gerrard's side came from two goals down to draw 2-2.'''
```
## Training procedure
During training, **short titles were combined with the articles' subtitles to improve the quality of the generated titles, and the subtitles were removed from the main body of the articles.**
## Limitations
In rare cases, if the opening few lines of a passage/article are descriptive enough, the model tends to copy these lines verbatim instead of drawing on information further down the article, which may not be desirable in some cases.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
**Rouge1: 38.2315**
**Rouge2: 18.6598**
**RougeL: 31.7393**
**RougeLsum: 31.7086**
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e12
|
theojolliffe
| 2022-05-09T08:38:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T20:46:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e12
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8157
- Rouge1: 56.7429
- Rouge2: 41.0185
- Rougel: 44.1014
- Rougelsum: 54.8121
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.5037 | 1.0 | 795 | 1.0815 | 52.4727 | 33.4915 | 35.3774 | 50.1955 | 142.0 |
| 0.8894 | 2.0 | 1590 | 0.9462 | 52.8867 | 34.0406 | 36.5249 | 50.4636 | 141.5741 |
| 0.7037 | 3.0 | 2385 | 0.8841 | 53.7966 | 35.0969 | 38.4158 | 51.3369 | 142.0 |
| 0.4914 | 4.0 | 3180 | 0.8437 | 52.6766 | 34.0573 | 36.8907 | 50.3088 | 142.0 |
| 0.3945 | 5.0 | 3975 | 0.8067 | 54.3147 | 36.2081 | 39.6366 | 52.1494 | 142.0 |
| 0.2799 | 6.0 | 4770 | 0.8403 | 54.2813 | 37.0786 | 39.9196 | 51.9176 | 141.9815 |
| 0.2211 | 7.0 | 5565 | 0.8207 | 53.9403 | 36.517 | 39.0372 | 51.4491 | 141.9815 |
| 0.1795 | 8.0 | 6360 | 0.8014 | 55.6607 | 39.3082 | 41.8295 | 53.4674 | 142.0 |
| 0.1428 | 9.0 | 7155 | 0.8051 | 55.0575 | 38.823 | 41.8849 | 52.9606 | 142.0 |
| 0.1358 | 10.0 | 7950 | 0.8149 | 56.6986 | 41.0 | 43.5207 | 54.6402 | 142.0 |
| 0.1122 | 11.0 | 8745 | 0.8134 | 56.5416 | 40.9495 | 44.2989 | 54.5623 | 142.0 |
| 0.0873 | 12.0 | 9540 | 0.8157 | 56.7429 | 41.0185 | 44.1014 | 54.8121 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
KushalRamaiya/ppo-LunarLander-v2
|
KushalRamaiya
| 2022-05-09T07:15:37Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T06:54:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 268.32 +/- 24.24
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
GuanOrg/DeepRLCourse2022
|
GuanOrg
| 2022-05-09T06:40:29Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-05T01:45:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 224.76 +/- 21.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huggingtweets/malnote
|
huggingtweets
| 2022-05-09T05:36:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-09T05:35:39Z |
---
language: en
thumbnail: http://www.huggingtweets.com/malnote/1652074591822/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475058675626561537/bI19TTid_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Arantxa Štefan</div>
<div style="text-align: center; font-size: 14px;">@malnote</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Arantxa Štefan.
| Data | Arantxa Štefan |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 6 |
| Short tweets | 218 |
| Tweets kept | 3026 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ow72fqyd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @malnote's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33l50h31) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33l50h31/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/malnote')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/computerforever
|
huggingtweets
| 2022-05-09T05:19:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-09T05:19:20Z |
---
language: en
thumbnail: http://www.huggingtweets.com/computerforever/1652073594573/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1518444670266839045/38xr9OAd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">computer sweetie</div>
<div style="text-align: center; font-size: 14px;">@computerforever</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from computer sweetie.
| Data | computer sweetie |
| --- | --- |
| Tweets downloaded | 2170 |
| Retweets | 48 |
| Short tweets | 313 |
| Tweets kept | 1809 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9j3sj0ot/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @computerforever's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iw1hcff) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iw1hcff/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/computerforever')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/hot_domme
|
huggingtweets
| 2022-05-09T02:29:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-02T18:11:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/hot_domme/1652063339945/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445280995175911425/JkWNc3mK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">™STREET DON 🥬⛓🦂غعتس دتعد🦂⛓ Steamin Hot</div>
<div style="text-align: center; font-size: 14px;">@hot_domme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ™STREET DON 🥬⛓🦂غعتس دتعد🦂⛓ Steamin Hot.
| Data | ™STREET DON 🥬⛓🦂غعتس دتعد🦂⛓ Steamin Hot |
| --- | --- |
| Tweets downloaded | 2733 |
| Retweets | 324 |
| Short tweets | 371 |
| Tweets kept | 2038 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cv5ajux/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hot_domme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2znfpdzh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2znfpdzh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hot_domme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e64
|
theojolliffe
| 2022-05-09T02:03:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T18:50:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e64
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0630
- Rouge1: 58.7
- Rouge2: 47.8042
- Rougel: 50.6967
- Rougelsum: 57.5543
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 64
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9499 | 53.8396 | 34.0954 | 35.6734 | 51.3453 | 142.0 |
| 1.1219 | 2.0 | 796 | 0.8223 | 53.0414 | 33.3193 | 35.7448 | 50.1675 | 142.0 |
| 0.6681 | 3.0 | 1194 | 0.7689 | 53.6684 | 35.3651 | 37.7087 | 51.1441 | 142.0 |
| 0.4393 | 4.0 | 1592 | 0.7694 | 53.9066 | 35.3925 | 38.8917 | 51.6172 | 142.0 |
| 0.4393 | 5.0 | 1990 | 0.7597 | 54.0746 | 36.1026 | 39.1318 | 51.9272 | 142.0 |
| 0.2947 | 6.0 | 2388 | 0.8284 | 53.1168 | 34.7428 | 38.0573 | 50.9563 | 142.0 |
| 0.2016 | 7.0 | 2786 | 0.7951 | 55.7222 | 39.0458 | 42.5265 | 53.5359 | 142.0 |
| 0.1422 | 8.0 | 3184 | 0.7793 | 56.2376 | 40.3348 | 43.435 | 54.3228 | 142.0 |
| 0.1096 | 9.0 | 3582 | 0.8260 | 55.0372 | 39.0552 | 42.5403 | 53.0694 | 142.0 |
| 0.1096 | 10.0 | 3980 | 0.8397 | 53.849 | 37.519 | 40.674 | 52.1357 | 141.7037 |
| 0.0881 | 11.0 | 4378 | 0.8504 | 56.4835 | 41.0484 | 44.9407 | 54.3557 | 142.0 |
| 0.0693 | 12.0 | 4776 | 0.8285 | 55.7705 | 39.8585 | 43.722 | 53.7607 | 142.0 |
| 0.0572 | 13.0 | 5174 | 0.8327 | 57.932 | 43.5378 | 46.8233 | 55.8739 | 142.0 |
| 0.0461 | 14.0 | 5572 | 0.8720 | 57.6733 | 42.9742 | 45.8698 | 56.018 | 142.0 |
| 0.0461 | 15.0 | 5970 | 0.8723 | 57.6072 | 42.6946 | 45.2551 | 55.8486 | 142.0 |
| 0.0416 | 16.0 | 6368 | 0.8764 | 57.1973 | 43.1931 | 46.4492 | 55.3842 | 142.0 |
| 0.0343 | 17.0 | 6766 | 0.8638 | 57.4474 | 43.3544 | 46.3026 | 55.7863 | 142.0 |
| 0.03 | 18.0 | 7164 | 0.9234 | 57.9166 | 43.8551 | 46.6473 | 56.3895 | 142.0 |
| 0.0252 | 19.0 | 7562 | 0.9393 | 58.2908 | 45.2321 | 47.1398 | 56.6618 | 142.0 |
| 0.0252 | 20.0 | 7960 | 0.8966 | 59.2798 | 46.381 | 49.3514 | 57.6061 | 142.0 |
| 0.024 | 21.0 | 8358 | 0.9056 | 57.8409 | 44.2048 | 47.3329 | 56.2568 | 142.0 |
| 0.0195 | 22.0 | 8756 | 0.9424 | 57.551 | 44.6847 | 47.2771 | 56.2391 | 142.0 |
| 0.0182 | 23.0 | 9154 | 0.9361 | 59.1078 | 46.4704 | 49.4178 | 57.6796 | 142.0 |
| 0.0169 | 24.0 | 9552 | 0.9456 | 56.7966 | 43.3135 | 46.4208 | 55.4646 | 142.0 |
| 0.0169 | 25.0 | 9950 | 0.9867 | 59.5561 | 47.4638 | 50.0725 | 58.2388 | 141.8519 |
| 0.0147 | 26.0 | 10348 | 0.9727 | 58.2574 | 44.9904 | 47.2701 | 56.4274 | 142.0 |
| 0.0125 | 27.0 | 10746 | 0.9589 | 58.6792 | 45.8465 | 48.0781 | 57.0755 | 142.0 |
| 0.0117 | 28.0 | 11144 | 0.9635 | 59.1118 | 46.6614 | 50.0552 | 57.6153 | 142.0 |
| 0.0103 | 29.0 | 11542 | 0.9623 | 58.2517 | 45.6401 | 48.5888 | 56.7733 | 142.0 |
| 0.0103 | 30.0 | 11940 | 0.9752 | 59.0707 | 47.203 | 49.7992 | 57.6216 | 142.0 |
| 0.0096 | 31.0 | 12338 | 0.9610 | 57.6781 | 44.0504 | 47.6718 | 56.1201 | 142.0 |
| 0.0089 | 32.0 | 12736 | 0.9705 | 58.5592 | 45.7397 | 48.681 | 57.0302 | 142.0 |
| 0.008 | 33.0 | 13134 | 0.9989 | 58.1997 | 45.6345 | 48.2551 | 56.8571 | 141.7778 |
| 0.0075 | 34.0 | 13532 | 0.9880 | 57.9632 | 44.7845 | 47.8763 | 56.3979 | 142.0 |
| 0.0075 | 35.0 | 13930 | 1.0041 | 58.1316 | 46.2737 | 49.5986 | 56.8263 | 142.0 |
| 0.0061 | 36.0 | 14328 | 0.9923 | 58.4686 | 46.1735 | 49.1299 | 57.0331 | 142.0 |
| 0.0066 | 37.0 | 14726 | 1.0157 | 58.4277 | 45.6559 | 49.1739 | 56.8198 | 141.6481 |
| 0.0052 | 38.0 | 15124 | 1.0220 | 58.5166 | 46.3883 | 50.0964 | 57.0104 | 142.0 |
| 0.0049 | 39.0 | 15522 | 0.9949 | 59.3697 | 47.0609 | 50.2733 | 58.1388 | 142.0 |
| 0.0049 | 40.0 | 15920 | 1.0368 | 59.9537 | 48.4059 | 51.8185 | 58.8002 | 142.0 |
| 0.0039 | 41.0 | 16318 | 1.0228 | 58.2093 | 46.4807 | 49.54 | 56.9994 | 142.0 |
| 0.0041 | 42.0 | 16716 | 1.0218 | 57.6376 | 45.4951 | 49.003 | 56.4606 | 142.0 |
| 0.0035 | 43.0 | 17114 | 1.0381 | 57.2845 | 43.9593 | 46.779 | 55.6106 | 142.0 |
| 0.0059 | 44.0 | 17512 | 1.0316 | 58.5506 | 46.2111 | 49.4844 | 56.9506 | 142.0 |
| 0.0059 | 45.0 | 17910 | 1.0388 | 58.8383 | 47.6053 | 50.6187 | 57.7125 | 142.0 |
| 0.0028 | 46.0 | 18308 | 1.0068 | 59.3198 | 47.6888 | 50.2478 | 58.0 | 142.0 |
| 0.0028 | 47.0 | 18706 | 1.0446 | 58.8938 | 46.7524 | 49.5642 | 57.3659 | 142.0 |
| 0.0022 | 48.0 | 19104 | 1.0347 | 59.8253 | 48.3871 | 51.3949 | 58.5652 | 142.0 |
| 0.0024 | 49.0 | 19502 | 1.0294 | 60.655 | 50.2339 | 53.1662 | 59.3333 | 142.0 |
| 0.0024 | 50.0 | 19900 | 1.0225 | 58.5131 | 47.3009 | 50.1642 | 57.2287 | 142.0 |
| 0.0022 | 51.0 | 20298 | 1.0320 | 59.6101 | 47.4104 | 50.5291 | 58.075 | 142.0 |
| 0.0018 | 52.0 | 20696 | 1.0507 | 58.7957 | 46.8893 | 50.2996 | 57.3662 | 142.0 |
| 0.0015 | 53.0 | 21094 | 1.0599 | 58.9064 | 47.9433 | 51.3082 | 57.6871 | 142.0 |
| 0.0015 | 54.0 | 21492 | 1.0636 | 59.6607 | 48.5737 | 51.2361 | 58.333 | 142.0 |
| 0.0013 | 55.0 | 21890 | 1.0452 | 58.7026 | 46.5286 | 49.9672 | 57.2521 | 142.0 |
| 0.0012 | 56.0 | 22288 | 1.0418 | 58.9452 | 47.7209 | 50.657 | 57.7103 | 142.0 |
| 0.0011 | 57.0 | 22686 | 1.0578 | 58.485 | 46.0691 | 49.811 | 57.2591 | 142.0 |
| 0.0009 | 58.0 | 23084 | 1.0561 | 59.2268 | 48.1987 | 50.1948 | 57.8871 | 142.0 |
| 0.0009 | 59.0 | 23482 | 1.0548 | 59.6307 | 48.1778 | 50.9934 | 58.2098 | 142.0 |
| 0.0009 | 60.0 | 23880 | 1.0498 | 59.5054 | 48.8866 | 51.5977 | 58.1868 | 142.0 |
| 0.0008 | 61.0 | 24278 | 1.0583 | 60.0232 | 49.2518 | 52.2297 | 58.6774 | 142.0 |
| 0.0007 | 62.0 | 24676 | 1.0659 | 59.1755 | 48.4144 | 51.5157 | 58.0416 | 142.0 |
| 0.0007 | 63.0 | 25074 | 1.0622 | 59.1023 | 47.74 | 50.5188 | 57.9707 | 142.0 |
| 0.0007 | 64.0 | 25472 | 1.0630 | 58.7 | 47.8042 | 50.6967 | 57.5543 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
exploiter345/dqn_lunar_v2
|
exploiter345
| 2022-05-09T00:27:59Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-09T00:27:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 167.08 +/- 79.19
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
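A minimal sketch of typical usage, not the author's code: it loads the DQN checkpoint (the local filename is an assumption) and runs one greedy episode.
```python
# Minimal sketch: load the DQN checkpoint and roll out one deterministic episode.
# "dqn-LunarLander-v2.zip" is an assumed local filename for the downloaded checkpoint.
import gym
from stable_baselines3 import DQN

env = gym.make("LunarLander-v2")
model = DQN.load("dqn-LunarLander-v2.zip", env=env)

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward:.2f}")
```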
|
ebonazza2910/model
|
ebonazza2910
| 2022-05-08T23:12:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-03T16:38:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Wer: 0.1301
## Model description
More information needed
## Intended uses & limitations
More information needed
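A minimal usage sketch (assumed): transcribe a 16 kHz audio file with the `automatic-speech-recognition` pipeline; the audio path is a placeholder.
```python
# Minimal sketch: speech-to-text with this fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ebonazza2910/model")
print(asr("path/to/audio.wav")["text"])  # expects 16 kHz mono audio
```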
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9743 | 0.18 | 400 | 2.1457 | 1.0000 |
| 0.5747 | 0.36 | 800 | 0.3415 | 0.3456 |
| 0.3383 | 0.54 | 1200 | 0.2797 | 0.3095 |
| 0.2967 | 0.72 | 1600 | 0.2464 | 0.2568 |
| 0.2747 | 0.9 | 2000 | 0.2341 | 0.2466 |
| 0.2501 | 1.08 | 2400 | 0.2299 | 0.2317 |
| 0.2309 | 1.26 | 2800 | 0.2306 | 0.2328 |
| 0.2273 | 1.44 | 3200 | 0.2212 | 0.2375 |
| 0.225 | 1.62 | 3600 | 0.2193 | 0.2267 |
| 0.2204 | 1.8 | 4000 | 0.2157 | 0.2295 |
| 0.2256 | 1.98 | 4400 | 0.2165 | 0.2260 |
| 0.1941 | 2.17 | 4800 | 0.2105 | 0.2163 |
| 0.1925 | 2.35 | 5200 | 0.2098 | 0.2153 |
| 0.1925 | 2.53 | 5600 | 0.2120 | 0.2148 |
| 0.1952 | 2.71 | 6000 | 0.2063 | 0.2178 |
| 0.1971 | 2.89 | 6400 | 0.2100 | 0.2158 |
| 0.1888 | 3.07 | 6800 | 0.2131 | 0.2172 |
| 0.1702 | 3.25 | 7200 | 0.2155 | 0.2203 |
| 0.173 | 3.43 | 7600 | 0.2141 | 0.2254 |
| 0.174 | 3.61 | 8000 | 0.2017 | 0.2100 |
| 0.1802 | 3.79 | 8400 | 0.1998 | 0.2043 |
| 0.1717 | 3.97 | 8800 | 0.2070 | 0.2110 |
| 0.162 | 4.15 | 9200 | 0.2082 | 0.2157 |
| 0.154 | 4.33 | 9600 | 0.2163 | 0.2161 |
| 0.1598 | 4.51 | 10000 | 0.2070 | 0.2171 |
| 0.1576 | 4.69 | 10400 | 0.2034 | 0.2116 |
| 0.1601 | 4.87 | 10800 | 0.1990 | 0.2009 |
| 0.152 | 5.05 | 11200 | 0.1994 | 0.2039 |
| 0.1395 | 5.23 | 11600 | 0.2013 | 0.2046 |
| 0.1407 | 5.41 | 12000 | 0.2009 | 0.2022 |
| 0.1449 | 5.59 | 12400 | 0.1982 | 0.1961 |
| 0.1483 | 5.77 | 12800 | 0.2082 | 0.2054 |
| 0.1514 | 5.95 | 13200 | 0.1953 | 0.1985 |
| 0.138 | 6.13 | 13600 | 0.2046 | 0.1965 |
| 0.1322 | 6.31 | 14000 | 0.2076 | 0.1948 |
| 0.1372 | 6.5 | 14400 | 0.1968 | 0.1944 |
| 0.136 | 6.68 | 14800 | 0.1971 | 0.1963 |
| 0.1382 | 6.86 | 15200 | 0.2001 | 0.1990 |
| 0.1335 | 7.04 | 15600 | 0.2026 | 0.1935 |
| 0.1206 | 7.22 | 16000 | 0.1986 | 0.1938 |
| 0.1239 | 7.4 | 16400 | 0.2054 | 0.1919 |
| 0.1254 | 7.58 | 16800 | 0.1918 | 0.1939 |
| 0.1262 | 7.76 | 17200 | 0.1960 | 0.1947 |
| 0.126 | 7.94 | 17600 | 0.1932 | 0.1906 |
| 0.1169 | 8.12 | 18000 | 0.2037 | 0.1916 |
| 0.1142 | 8.3 | 18400 | 0.1999 | 0.1900 |
| 0.1151 | 8.48 | 18800 | 0.1920 | 0.1855 |
| 0.1121 | 8.66 | 19200 | 0.2007 | 0.1859 |
| 0.1135 | 8.84 | 19600 | 0.1932 | 0.1879 |
| 0.1158 | 9.02 | 20000 | 0.1916 | 0.1859 |
| 0.105 | 9.2 | 20400 | 0.1961 | 0.1831 |
| 0.1023 | 9.38 | 20800 | 0.1914 | 0.1791 |
| 0.1004 | 9.56 | 21200 | 0.1881 | 0.1787 |
| 0.1023 | 9.74 | 21600 | 0.1963 | 0.1817 |
| 0.1075 | 9.92 | 22000 | 0.1889 | 0.1861 |
| 0.103 | 10.1 | 22400 | 0.1975 | 0.1791 |
| 0.0952 | 10.28 | 22800 | 0.1979 | 0.1787 |
| 0.0957 | 10.46 | 23200 | 0.1922 | 0.1817 |
| 0.0966 | 10.65 | 23600 | 0.1953 | 0.1857 |
| 0.0997 | 10.83 | 24000 | 0.1902 | 0.1783 |
| 0.0981 | 11.01 | 24400 | 0.1959 | 0.1780 |
| 0.0868 | 11.19 | 24800 | 0.2056 | 0.1783 |
| 0.0905 | 11.37 | 25200 | 0.1958 | 0.1777 |
| 0.0892 | 11.55 | 25600 | 0.1935 | 0.1796 |
| 0.0891 | 11.73 | 26000 | 0.1968 | 0.1763 |
| 0.0888 | 11.91 | 26400 | 0.2043 | 0.1804 |
| 0.0842 | 12.09 | 26800 | 0.2043 | 0.1733 |
| 0.0828 | 12.27 | 27200 | 0.1964 | 0.1715 |
| 0.0827 | 12.45 | 27600 | 0.1991 | 0.1749 |
| 0.0844 | 12.63 | 28000 | 0.2014 | 0.1695 |
| 0.0837 | 12.81 | 28400 | 0.1973 | 0.1759 |
| 0.0872 | 12.99 | 28800 | 0.1975 | 0.1689 |
| 0.0778 | 13.17 | 29200 | 0.1979 | 0.1740 |
| 0.0759 | 13.35 | 29600 | 0.2093 | 0.1753 |
| 0.076 | 13.53 | 30000 | 0.1990 | 0.1731 |
| 0.0762 | 13.71 | 30400 | 0.2024 | 0.1690 |
| 0.0764 | 13.89 | 30800 | 0.2037 | 0.1709 |
| 0.0756 | 14.07 | 31200 | 0.2007 | 0.1716 |
| 0.0702 | 14.25 | 31600 | 0.2011 | 0.1680 |
| 0.0694 | 14.43 | 32000 | 0.2061 | 0.1683 |
| 0.0713 | 14.61 | 32400 | 0.2014 | 0.1687 |
| 0.0693 | 14.79 | 32800 | 0.1961 | 0.1658 |
| 0.071 | 14.98 | 33200 | 0.1921 | 0.1645 |
| 0.0659 | 15.16 | 33600 | 0.2079 | 0.1682 |
| 0.0659 | 15.34 | 34000 | 0.2046 | 0.1649 |
| 0.0685 | 15.52 | 34400 | 0.1994 | 0.1660 |
| 0.0663 | 15.7 | 34800 | 0.1970 | 0.1652 |
| 0.0678 | 15.88 | 35200 | 0.1961 | 0.1634 |
| 0.0644 | 16.06 | 35600 | 0.2141 | 0.1644 |
| 0.0596 | 16.24 | 36000 | 0.2098 | 0.1628 |
| 0.0629 | 16.42 | 36400 | 0.1969 | 0.1616 |
| 0.0598 | 16.6 | 36800 | 0.2026 | 0.1604 |
| 0.0628 | 16.78 | 37200 | 0.2050 | 0.1620 |
| 0.0616 | 16.96 | 37600 | 0.1958 | 0.1618 |
| 0.0538 | 17.14 | 38000 | 0.2093 | 0.1588 |
| 0.0573 | 17.32 | 38400 | 0.1995 | 0.1588 |
| 0.0555 | 17.5 | 38800 | 0.2077 | 0.1608 |
| 0.0555 | 17.68 | 39200 | 0.2036 | 0.1571 |
| 0.0578 | 17.86 | 39600 | 0.2045 | 0.1572 |
| 0.056 | 18.04 | 40000 | 0.2065 | 0.1593 |
| 0.0525 | 18.22 | 40400 | 0.2093 | 0.1580 |
| 0.0527 | 18.4 | 40800 | 0.2141 | 0.1585 |
| 0.0529 | 18.58 | 41200 | 0.2137 | 0.1585 |
| 0.0533 | 18.76 | 41600 | 0.2021 | 0.1558 |
| 0.0529 | 18.94 | 42000 | 0.2108 | 0.1535 |
| 0.05 | 19.12 | 42400 | 0.2114 | 0.1555 |
| 0.0479 | 19.31 | 42800 | 0.2091 | 0.1549 |
| 0.0509 | 19.49 | 43200 | 0.2145 | 0.1554 |
| 0.0486 | 19.67 | 43600 | 0.2061 | 0.1536 |
| 0.049 | 19.85 | 44000 | 0.2132 | 0.1548 |
| 0.0484 | 20.03 | 44400 | 0.2077 | 0.1523 |
| 0.0449 | 20.21 | 44800 | 0.2177 | 0.1529 |
| 0.0452 | 20.39 | 45200 | 0.2204 | 0.1517 |
| 0.0477 | 20.57 | 45600 | 0.2132 | 0.1517 |
| 0.048 | 20.75 | 46000 | 0.2119 | 0.1532 |
| 0.0469 | 20.93 | 46400 | 0.2109 | 0.1524 |
| 0.0439 | 21.11 | 46800 | 0.2118 | 0.1503 |
| 0.044 | 21.29 | 47200 | 0.2033 | 0.1474 |
| 0.0435 | 21.47 | 47600 | 0.2066 | 0.1485 |
| 0.0418 | 21.65 | 48000 | 0.2125 | 0.1491 |
| 0.0417 | 21.83 | 48400 | 0.2139 | 0.1487 |
| 0.0446 | 22.01 | 48800 | 0.2054 | 0.1493 |
| 0.039 | 22.19 | 49200 | 0.2179 | 0.1459 |
| 0.0414 | 22.37 | 49600 | 0.2118 | 0.1466 |
| 0.0394 | 22.55 | 50000 | 0.2104 | 0.1444 |
| 0.0381 | 22.73 | 50400 | 0.2095 | 0.1458 |
| 0.0382 | 22.91 | 50800 | 0.2193 | 0.1471 |
| 0.0391 | 23.09 | 51200 | 0.2143 | 0.1455 |
| 0.0365 | 23.27 | 51600 | 0.2198 | 0.1445 |
| 0.0368 | 23.46 | 52000 | 0.2151 | 0.1444 |
| 0.038 | 23.64 | 52400 | 0.2094 | 0.1439 |
| 0.038 | 23.82 | 52800 | 0.2137 | 0.1422 |
| 0.0374 | 24.0 | 53200 | 0.2180 | 0.1425 |
| 0.0352 | 24.18 | 53600 | 0.2207 | 0.1422 |
| 0.0343 | 24.36 | 54000 | 0.2269 | 0.1445 |
| 0.0353 | 24.54 | 54400 | 0.2222 | 0.1438 |
| 0.0348 | 24.72 | 54800 | 0.2224 | 0.1413 |
| 0.0342 | 24.9 | 55200 | 0.2146 | 0.1401 |
| 0.0337 | 25.08 | 55600 | 0.2246 | 0.1408 |
| 0.0327 | 25.26 | 56000 | 0.2161 | 0.1401 |
| 0.0339 | 25.44 | 56400 | 0.2212 | 0.1402 |
| 0.0324 | 25.62 | 56800 | 0.2203 | 0.1394 |
| 0.0319 | 25.8 | 57200 | 0.2145 | 0.1376 |
| 0.0317 | 25.98 | 57600 | 0.2147 | 0.1375 |
| 0.0302 | 26.16 | 58000 | 0.2213 | 0.1362 |
| 0.0309 | 26.34 | 58400 | 0.2218 | 0.1365 |
| 0.0308 | 26.52 | 58800 | 0.2167 | 0.1362 |
| 0.0294 | 26.7 | 59200 | 0.2169 | 0.1368 |
| 0.0297 | 26.88 | 59600 | 0.2163 | 0.1350 |
| 0.0289 | 27.06 | 60000 | 0.2188 | 0.1348 |
| 0.0284 | 27.24 | 60400 | 0.2172 | 0.1338 |
| 0.0278 | 27.42 | 60800 | 0.2230 | 0.1342 |
| 0.0283 | 27.6 | 61200 | 0.2233 | 0.1342 |
| 0.0292 | 27.79 | 61600 | 0.2238 | 0.1335 |
| 0.0286 | 27.97 | 62000 | 0.2218 | 0.1327 |
| 0.0262 | 28.15 | 62400 | 0.2220 | 0.1324 |
| 0.0274 | 28.33 | 62800 | 0.2182 | 0.1323 |
| 0.0279 | 28.51 | 63200 | 0.2170 | 0.1314 |
| 0.0269 | 28.69 | 63600 | 0.2228 | 0.1313 |
| 0.0264 | 28.87 | 64000 | 0.2209 | 0.1313 |
| 0.0254 | 29.05 | 64400 | 0.2224 | 0.1304 |
| 0.026 | 29.23 | 64800 | 0.2220 | 0.1302 |
| 0.0253 | 29.41 | 65200 | 0.2229 | 0.1304 |
| 0.0244 | 29.59 | 65600 | 0.2217 | 0.1298 |
| 0.025 | 29.77 | 66000 | 0.2223 | 0.1303 |
| 0.0255 | 29.95 | 66400 | 0.2220 | 0.1301 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e12
|
theojolliffe
| 2022-05-08T23:01:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T20:57:25Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e12
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8658
- Rouge1: 57.2678
- Rouge2: 43.347
- Rougel: 47.0854
- Rougelsum: 55.4167
- Gen Len: 142.0
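Given the ROUGE metrics, this checkpoint appears to be a summarization fine-tune; a minimal sketch for trying it with the `transformers` pipeline (the input text is a placeholder, and `max_length=142` simply mirrors the reported generation length):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e12",
)

text = "Replace this with the report or article you want to summarise."
print(summarizer(text, max_length=142)[0]["summary_text"])
```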
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.2548 | 1.0 | 795 | 0.9154 | 53.4249 | 34.0377 | 36.4396 | 50.9884 | 141.8889 |
| 0.6994 | 2.0 | 1590 | 0.8213 | 54.7613 | 35.9428 | 38.3899 | 51.9527 | 142.0 |
| 0.5272 | 3.0 | 2385 | 0.7703 | 53.8561 | 35.4871 | 38.0502 | 51.131 | 141.8889 |
| 0.3407 | 4.0 | 3180 | 0.7764 | 53.9514 | 35.8553 | 39.1935 | 51.7005 | 142.0 |
| 0.2612 | 5.0 | 3975 | 0.7529 | 54.4056 | 36.2605 | 40.8003 | 52.0424 | 142.0 |
| 0.1702 | 6.0 | 4770 | 0.8105 | 54.2251 | 37.1441 | 41.2472 | 52.2803 | 142.0 |
| 0.1276 | 7.0 | 5565 | 0.8004 | 56.49 | 40.4009 | 44.018 | 54.2404 | 141.5556 |
| 0.0978 | 8.0 | 6360 | 0.7890 | 56.6339 | 40.9867 | 43.9603 | 54.4468 | 142.0 |
| 0.0711 | 9.0 | 7155 | 0.8285 | 56.0469 | 40.7758 | 44.1395 | 53.9668 | 142.0 |
| 0.0649 | 10.0 | 7950 | 0.8498 | 56.9873 | 42.4721 | 46.705 | 55.2188 | 142.0 |
| 0.0471 | 11.0 | 8745 | 0.8547 | 57.7898 | 43.4238 | 46.5868 | 56.0858 | 142.0 |
| 0.0336 | 12.0 | 9540 | 0.8658 | 57.2678 | 43.347 | 47.0854 | 55.4167 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nikuznetsov/roberta-base-finetuned-cola
|
nikuznetsov
| 2022-05-08T21:02:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-08T20:43:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5880199146512337
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7832
- Matthews Correlation: 0.5880
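A minimal sketch for running the classifier on a single sentence (CoLA is a binary grammatical-acceptability task; the label names returned depend on the checkpoint's config, often `LABEL_0`/`LABEL_1` for auto-generated fine-tunes):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nikuznetsov/roberta-base-finetuned-cola",
)

# CoLA judges whether a single sentence is grammatically acceptable.
print(classifier("The book was read by the student."))
```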
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5027 | 1.0 | 535 | 0.6017 | 0.4369 |
| 0.33 | 2.0 | 1070 | 0.5066 | 0.5521 |
| 0.2311 | 3.0 | 1605 | 0.6269 | 0.5727 |
| 0.1767 | 4.0 | 2140 | 0.7832 | 0.5880 |
| 0.1337 | 5.0 | 2675 | 0.9164 | 0.5880 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Siyam/Dansk-wav2vec2-stt
|
Siyam
| 2022-05-08T20:58:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-08T16:16:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Dansk-wav2vec2-stt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dansk-wav2vec2-stt
This model is a fine-tuned version of [Siyam/Dansk-wav2vec21](https://huggingface.co/Siyam/Dansk-wav2vec21) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7500
- Wer: 0.3929
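A minimal sketch for transcribing a Danish audio clip with the ASR pipeline (the file path is a placeholder; 16 kHz mono audio is assumed, as is standard for wav2vec2 fine-tunes):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Siyam/Dansk-wav2vec2-stt",
)

# Expects a path to an audio file; ffmpeg is used to decode and resample it.
print(asr("sample_danish_clip.wav")["text"])
```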
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0298 | 4.26 | 400 | 0.8420 | 0.4579 |
| 0.0479 | 8.51 | 800 | 0.8713 | 0.4461 |
| 0.0387 | 12.77 | 1200 | 0.8307 | 0.4404 |
| 0.0336 | 17.02 | 1600 | 0.8322 | 0.4144 |
| 0.0322 | 21.28 | 2000 | 0.7493 | 0.4081 |
| 0.0288 | 25.53 | 2400 | 0.7361 | 0.3951 |
| 0.0264 | 29.79 | 2800 | 0.7500 | 0.3929 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e32
|
theojolliffe
| 2022-05-08T20:42:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T17:32:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e32
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9622
- Rouge1: 58.4519
- Rouge2: 45.6847
- Rougel: 49.3188
- Rougelsum: 57.1351
- Gen Len: 141.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
- mixed_precision_training: Native AMP
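These settings correspond roughly to the following `Seq2SeqTrainingArguments` sketch (argument names assume the standard `transformers` `Seq2SeqTrainer` API; `fp16=True` stands in for "Native AMP", and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Rough mapping of the reported configuration onto TrainingArguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-cnn-arxiv-pubmed-v3-e32",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=32,
    fp16=True,  # Native AMP mixed precision
)
```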
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.4924 | 1.0 | 795 | 1.0924 | 52.3565 | 32.9081 | 34.6648 | 49.6351 | 142.0 |
| 0.8865 | 2.0 | 1590 | 0.9394 | 54.2962 | 35.9725 | 38.3888 | 51.5708 | 140.9815 |
| 0.6979 | 3.0 | 2385 | 0.8831 | 53.6795 | 35.226 | 37.4988 | 51.4424 | 141.8704 |
| 0.4868 | 4.0 | 3180 | 0.8457 | 53.9141 | 35.2212 | 37.6423 | 51.63 | 142.0 |
| 0.3903 | 5.0 | 3975 | 0.8252 | 54.8908 | 36.8468 | 39.072 | 52.6068 | 141.8704 |
| 0.2725 | 6.0 | 4770 | 0.8338 | 54.2424 | 36.4675 | 39.6312 | 51.9973 | 142.0 |
| 0.2177 | 7.0 | 5565 | 0.8224 | 54.0085 | 36.9395 | 39.7131 | 51.8476 | 142.0 |
| 0.1736 | 8.0 | 6360 | 0.8001 | 55.5106 | 38.8828 | 41.7174 | 53.3171 | 141.7222 |
| 0.1368 | 9.0 | 7155 | 0.8036 | 56.7284 | 40.8327 | 42.8486 | 54.6505 | 141.8519 |
| 0.1272 | 10.0 | 7950 | 0.8197 | 54.5703 | 38.5037 | 41.591 | 52.4417 | 141.2963 |
| 0.0977 | 11.0 | 8745 | 0.8463 | 55.3691 | 40.5406 | 43.9156 | 53.6637 | 141.7593 |
| 0.0768 | 12.0 | 9540 | 0.8467 | 56.7099 | 41.6472 | 44.8171 | 54.8111 | 142.0 |
| 0.0702 | 13.0 | 10335 | 0.8488 | 56.6646 | 41.2164 | 43.8938 | 54.7209 | 142.0 |
| 0.0597 | 14.0 | 11130 | 0.8543 | 55.7245 | 40.9593 | 42.5698 | 53.8763 | 142.0 |
| 0.0514 | 15.0 | 11925 | 0.8567 | 56.4837 | 41.8224 | 44.5484 | 54.9102 | 142.0 |
| 0.045 | 16.0 | 12720 | 0.8794 | 57.5862 | 43.4725 | 46.3658 | 55.9579 | 142.0 |
| 0.0367 | 17.0 | 13515 | 0.8974 | 57.1023 | 42.9042 | 45.8444 | 55.2216 | 142.0 |
| 0.0346 | 18.0 | 14310 | 0.9143 | 57.7781 | 43.8333 | 47.0943 | 56.0032 | 142.0 |
| 0.03 | 19.0 | 15105 | 0.9044 | 56.9211 | 41.9678 | 44.5081 | 54.8092 | 141.6667 |
| 0.0241 | 20.0 | 15900 | 0.9109 | 57.7747 | 44.1122 | 46.5743 | 55.9199 | 141.8148 |
| 0.0225 | 21.0 | 16695 | 0.9180 | 56.2307 | 42.2787 | 45.602 | 54.6285 | 142.0 |
| 0.0184 | 22.0 | 17490 | 0.9120 | 57.4024 | 43.657 | 46.5646 | 55.4614 | 142.0 |
| 0.0182 | 23.0 | 18285 | 0.9262 | 57.292 | 42.8935 | 46.1294 | 55.3741 | 141.963 |
| 0.016 | 24.0 | 19080 | 0.9268 | 58.2018 | 44.3914 | 47.7056 | 56.4628 | 142.0 |
| 0.0139 | 25.0 | 19875 | 0.9373 | 58.1187 | 44.7233 | 47.8946 | 56.26 | 142.0 |
| 0.0125 | 26.0 | 20670 | 0.9300 | 57.8399 | 44.3073 | 48.4549 | 56.1325 | 141.8889 |
| 0.012 | 27.0 | 21465 | 0.9487 | 57.8585 | 43.8361 | 47.6488 | 56.2748 | 142.0 |
| 0.0095 | 28.0 | 22260 | 0.9620 | 57.5966 | 44.0481 | 46.8771 | 56.079 | 141.6852 |
| 0.009 | 29.0 | 23055 | 0.9526 | 57.8869 | 44.2234 | 48.0884 | 56.3158 | 141.9815 |
| 0.008 | 30.0 | 23850 | 0.9626 | 58.2649 | 45.0371 | 48.5288 | 56.7707 | 141.9815 |
| 0.0076 | 31.0 | 24645 | 0.9640 | 58.1467 | 45.0457 | 48.7258 | 56.7111 | 141.3704 |
| 0.0072 | 32.0 | 25440 | 0.9622 | 58.4519 | 45.6847 | 49.3188 | 57.1351 | 141.9815 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jecp97/trial-ppo-LunarLander-v2
|
jecp97
| 2022-05-08T20:28:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-08T16:22:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 206.72 +/- 58.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
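A minimal loading sketch with `huggingface_sb3` (the `.zip` filename inside the repo is an assumption; check the repository files for the actual name):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption about how the checkpoint was saved in the repo.
checkpoint = load_from_hub(
    repo_id="jecp97/trial-ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll the trained agent out in the environment.
env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```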
|
sam999/t5-end2end-questions-generation
|
sam999
| 2022-05-08T20:01:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T01:16:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6940
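A minimal generation sketch (the input formatting is an assumption; end-to-end question-generation fine-tunes often expect a task prefix or highlight tokens that depend on how the SQuAD data was preprocessed):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "sam999/t5-end2end-questions-generation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Feed a context passage and let the model generate questions from it.
context = "The Amazon rainforest covers much of the Amazon basin of South America."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```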
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0297 | 0.07 | 100 | 1.6940 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huxxx657/roberta-base-finetuned-squad
|
huxxx657
| 2022-05-08T19:57:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-08T02:59:11Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8152
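A minimal sketch with the question-answering pipeline (the model was trained on squad_v2, so unanswerable questions are part of its training distribution):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="huxxx657/roberta-base-finetuned-squad",
)

# Extractive QA: the answer is a span copied from the context.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```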
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8557 | 1.0 | 8239 | 0.8152 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AlphaStar/TEST2ppo-LunarLander-v2
|
AlphaStar
| 2022-05-08T19:47:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-08T19:40:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 192.40 +/- 60.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
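A minimal evaluation sketch (the checkpoint filename is an assumption; if it holds, `evaluate_policy` should land near the reported mean reward of roughly 192):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption; check the repo for the saved .zip name.
checkpoint = load_from_hub(
    repo_id="AlphaStar/TEST2ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded agent over a handful of episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```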
|