| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| titi7242229/roberta-base-bne-finetuned_personality_multi_4 | titi7242229 | 2022-06-11T19:13:27Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-11T13:23:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_4
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1709
- Accuracy: 0.3470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
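For orientation, the list above maps onto 🤗 Transformers `TrainingArguments` roughly as in the sketch below; this is a reconstruction, not the card author's script, and `output_dir` plus anything not listed are illustrative defaults.

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above (not the author's code)
training_args = TrainingArguments(
    output_dir="roberta-base-bne-finetuned_personality_multi_4",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```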
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1759 | 1.0 | 125 | 2.1873 | 0.2548 |
| 1.8651 | 2.0 | 250 | 2.2285 | 0.2680 |
| 1.8619 | 3.0 | 375 | 2.1732 | 0.2951 |
| 1.7224 | 4.0 | 500 | 2.0688 | 0.3925 |
| 1.6432 | 5.0 | 625 | 2.1094 | 0.3735 |
| 1.3599 | 6.0 | 750 | 2.1732 | 0.3631 |
| 1.0623 | 7.0 | 875 | 2.4785 | 0.3579 |
| 1.0504 | 8.0 | 1000 | 2.4598 | 0.3844 |
| 0.7662 | 9.0 | 1125 | 2.8081 | 0.3573 |
| 0.9167 | 10.0 | 1250 | 2.9385 | 0.3452 |
| 0.6391 | 11.0 | 1375 | 2.9933 | 0.3320 |
| 0.3893 | 12.0 | 1500 | 3.1037 | 0.3579 |
| 0.673 | 13.0 | 1625 | 3.4369 | 0.3631 |
| 0.3498 | 14.0 | 1750 | 3.6396 | 0.3383 |
| 0.3891 | 15.0 | 1875 | 3.8332 | 0.3556 |
| 0.0818 | 16.0 | 2000 | 3.9451 | 0.3401 |
| 0.1438 | 17.0 | 2125 | 3.9271 | 0.3458 |
| 0.0634 | 18.0 | 2250 | 4.1564 | 0.3481 |
| 0.0121 | 19.0 | 2375 | 4.1405 | 0.3499 |
| 0.0071 | 20.0 | 2500 | 4.1709 | 0.3470 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar | ahmeddbahaa | 2022-06-11T19:13:08Z | 12 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "ar", "abstractive summarization", "xlsum", "generated_from_trainer", "dataset:xlsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | summarization | 2022-06-11T01:21:55Z |
---
license: apache-2.0
tags:
- summarization
- t5
- ar
- abstractive summarization
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: t5-arabic-base-finetuned-xlsum-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-base-finetuned-xlsum-ar
This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0328
- Rouge-1: 23.72
- Rouge-2: 10.95
- Rouge-l: 21.59
- Gen Len: 19.0
- Bertscore: 71.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
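## How to use
The checkpoint should work with the standard 🤗 Transformers summarization pipeline; a minimal sketch, where the input string and generation settings are placeholders:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar",
)

article = "..."  # replace with the Arabic article to summarize
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```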
|
| huggingtweets/elonmusk-iamjohnoliver-neiltyson | huggingtweets | 2022-06-11T19:00:50Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-11T18:54:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-iamjohnoliver-neiltyson/1654974044761/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393958859/main_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/74188698/NeilTysonOriginsA-Crop_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & John Oliver & Neil deGrasse Tyson</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-iamjohnoliver-neiltyson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & John Oliver & Neil deGrasse Tyson.
| Data | Elon Musk | John Oliver | Neil deGrasse Tyson |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 636 | 3237 |
| Retweets | 147 | 122 | 10 |
| Short tweets | 954 | 9 | 87 |
| Tweets kept | 2099 | 505 | 3140 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/14h905cr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-iamjohnoliver-neiltyson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gcc5ko3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gcc5ko3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/elonmusk-iamjohnoliver-neiltyson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| huggingtweets/rterdogan | huggingtweets | 2022-06-11T18:56:47Z | 104 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1151410974240444416/yVvaD7hU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Recep Tayyip Erdoğan</div>
<div style="text-align: center; font-size: 14px;">@rterdogan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Recep Tayyip Erdoğan.
| Data | Recep Tayyip Erdoğan |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 418 |
| Short tweets | 54 |
| Tweets kept | 2778 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wf1dbaih/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rterdogan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a3w2qxa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a3w2qxa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/rterdogan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| Galeros/dqn-mountaincar-v0-local | Galeros | 2022-06-11T18:38:27Z | 3 | 0 | stable-baselines3 | ["stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-06-11T18:38:19Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -98.80 +/- 21.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
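A minimal loading sketch with `huggingface_sb3`; the archive filename below is a guess, so check the repository's file list before using it.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Hypothetical filename; adjust to the file actually stored in the repo
checkpoint = load_from_hub(
    repo_id="Galeros/dqn-mountaincar-v0-local",
    filename="dqn-MountainCar-v0.zip",
)
model = DQN.load(checkpoint)
```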
|
| aprischa/bart-large-cnn-aprischa | aprischa | 2022-06-11T17:21:57Z | 105 | 0 | transformers | ["transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-06-11T16:53:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-aprischa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-aprischa
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3589
- Rouge1: 66.7098
- Rouge2: 57.7992
- Rougel: 63.2231
- Rougelsum: 65.9009
- Gen Len: 141.198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.369 | 1.0 | 5403 | 0.3835 | 66.0604 | 56.9948 | 62.4967 | 65.265 | 141.1126 |
| 0.2985 | 2.0 | 10806 | 0.3589 | 66.7098 | 57.7992 | 63.2231 | 65.9009 | 141.198 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| DancingIguana/codeparrot-ds | DancingIguana | 2022-06-11T16:58:04Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-08T21:56:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| bubblecookie/t5-small-finetuned-cnndm_trained | bubblecookie | 2022-06-11T16:48:45Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-06-10T06:21:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-small-finetuned-cnndm_trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_trained
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| tuni/distilbert-base-uncased-finetuned-cola | tuni | 2022-06-11T15:12:53Z | 17 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-11T13:50:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5324115893962171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7035
- Matthews Correlation: 0.5324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.785228097724678e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5005 | 0.4121 |
| 0.318 | 2.0 | 1070 | 0.5265 | 0.4977 |
| 0.1887 | 3.0 | 1605 | 0.7035 | 0.5324 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
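## How to use
The model should load with the standard 🤗 Transformers text-classification pipeline; a minimal sketch, noting that the label names come from the saved config and may be generic `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tuni/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a binary linguistic-acceptability task
print(classifier("The book was written by the author."))
```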
|
| IshanKumar/molecular_generation | IshanKumar | 2022-06-11T14:27:39Z | 0 | 0 | keras | ["keras", "tensorboard", "tf-keras", "mol_gen", "region:us"] | null | 2022-06-02T19:30:33Z |
---
library_name: keras
tags:
- mol_gen
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.0005, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
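For reference, the optimizer configuration above corresponds roughly to the following Keras call; this is a sketch, not the author's training code.

```python
import tensorflow as tf

# Reconstructed from the configuration dictionary listed above
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.0005,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```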
## Training Metrics
| Epochs | Train Loss |
|--- |--- |
| 1| 68866.578|
| 2| 68818.219|
| 3| 68850.844|
| 4| 68829.688|
| 5| 68840.258|
| 6| 68813.281|
| 7| 68809.414|
| 8| 68815.312|
| 9| 68805.641|
| 10| 68803.672|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
| neeenway/ppo-LunarLander-v2 | neeenway | 2022-06-11T13:43:31Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-06-11T13:43:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 240.31 +/- 12.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
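A minimal loading sketch with `huggingface_sb3`; the archive filename is a guess, and the short rollout assumes a gym version where `reset()` returns only the observation.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; adjust to the file actually stored in the repo
checkpoint = load_from_hub(
    repo_id="neeenway/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()  # gym < 0.26 API
action, _states = model.predict(obs, deterministic=True)
```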
|
| huggingtweets/nosuba_13 | huggingtweets | 2022-06-11T13:40:57Z | 105 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-11T13:40:23Z |
---
language: en
thumbnail: http://www.huggingtweets.com/nosuba_13/1654954852706/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1382014203796553732/DFDiOrcz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Noel</div>
<div style="text-align: center; font-size: 14px;">@nosuba_13</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Noel.
| Data | Noel |
| --- | --- |
| Tweets downloaded | 3170 |
| Retweets | 859 |
| Short tweets | 369 |
| Tweets kept | 1942 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ui1lp214/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nosuba_13's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6sn9tlrz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6sn9tlrz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/nosuba_13')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| YeRyeongLee/albert-base-v2-finetuned-filtered-0609 | YeRyeongLee | 2022-06-11T13:33:02Z | 106 | 0 | transformers | ["transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-11T11:46:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: albert-base-v2-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-filtered-0609
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2062
- Accuracy: 0.9723
- Precision: 0.9724
- Recall: 0.9723
- F1: 0.9723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2688 | 1.0 | 3180 | 0.2282 | 0.9560 | 0.9577 | 0.9560 | 0.9562 |
| 0.2268 | 2.0 | 6360 | 0.1909 | 0.9638 | 0.9640 | 0.9638 | 0.9638 |
| 0.1831 | 3.0 | 9540 | 0.2590 | 0.9572 | 0.9584 | 0.9572 | 0.9572 |
| 0.1588 | 4.0 | 12720 | 0.1752 | 0.9673 | 0.9678 | 0.9673 | 0.9673 |
| 0.0972 | 5.0 | 15900 | 0.1868 | 0.9695 | 0.9696 | 0.9695 | 0.9695 |
| 0.0854 | 6.0 | 19080 | 0.2042 | 0.9701 | 0.9707 | 0.9701 | 0.9702 |
| 0.0599 | 7.0 | 22260 | 0.1793 | 0.9748 | 0.9749 | 0.9748 | 0.9749 |
| 0.0389 | 8.0 | 25440 | 0.1996 | 0.9742 | 0.9743 | 0.9742 | 0.9742 |
| 0.0202 | 9.0 | 28620 | 0.2188 | 0.9723 | 0.9726 | 0.9723 | 0.9724 |
| 0.0152 | 10.0 | 31800 | 0.2062 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
| marieke93/BERT-evidence-types | marieke93 | 2022-06-11T13:32:10Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-08T11:54:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT-evidence-types
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-evidence-types
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the evidence types dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8008
- Macro f1: 0.4227
- Weighted f1: 0.6976
- Accuracy: 0.7154
- Balanced accuracy: 0.3876
## Training and evaluation data
The dataset, as well as the code used to fine-tune this model, can be found in the GitHub repository [BA-Thesis-Information-Science-Persuasion-Strategies](https://github.com/mariekevdh/BA-Thesis-Information-Science-Persuasion-Strategies).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro f1 | Weighted f1 | Accuracy | Balanced accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:-----------------:|
| 1.1148 | 1.0 | 125 | 1.0531 | 0.2566 | 0.6570 | 0.6705 | 0.2753 |
| 0.7546 | 2.0 | 250 | 0.9725 | 0.3424 | 0.6947 | 0.7002 | 0.3334 |
| 0.4757 | 3.0 | 375 | 1.1375 | 0.3727 | 0.7113 | 0.7184 | 0.3680 |
| 0.2637 | 4.0 | 500 | 1.3585 | 0.3807 | 0.6836 | 0.6910 | 0.3805 |
| 0.1408 | 5.0 | 625 | 1.6605 | 0.3785 | 0.6765 | 0.6872 | 0.3635 |
| 0.0856 | 6.0 | 750 | 1.9703 | 0.3802 | 0.6890 | 0.7047 | 0.3704 |
| 0.0502 | 7.0 | 875 | 2.1245 | 0.4067 | 0.6995 | 0.7169 | 0.3751 |
| 0.0265 | 8.0 | 1000 | 2.2676 | 0.3756 | 0.6816 | 0.6925 | 0.3647 |
| 0.0147 | 9.0 | 1125 | 2.4286 | 0.4052 | 0.6887 | 0.7062 | 0.3803 |
| 0.0124 | 10.0 | 1250 | 2.5773 | 0.4084 | 0.6853 | 0.7040 | 0.3695 |
| 0.0111 | 11.0 | 1375 | 2.5941 | 0.4146 | 0.6915 | 0.7085 | 0.3834 |
| 0.0076 | 12.0 | 1500 | 2.6124 | 0.4157 | 0.6936 | 0.7078 | 0.3863 |
| 0.0067 | 13.0 | 1625 | 2.7050 | 0.4139 | 0.6925 | 0.7108 | 0.3798 |
| 0.0087 | 14.0 | 1750 | 2.6695 | 0.4252 | 0.7009 | 0.7169 | 0.3920 |
| 0.0056 | 15.0 | 1875 | 2.7357 | 0.4257 | 0.6985 | 0.7161 | 0.3868 |
| 0.0054 | 16.0 | 2000 | 2.7389 | 0.4249 | 0.6955 | 0.7116 | 0.3890 |
| 0.0051 | 17.0 | 2125 | 2.7767 | 0.4197 | 0.6967 | 0.7146 | 0.3863 |
| 0.004 | 18.0 | 2250 | 2.7947 | 0.4211 | 0.6977 | 0.7154 | 0.3876 |
| 0.0041 | 19.0 | 2375 | 2.8030 | 0.4204 | 0.6953 | 0.7131 | 0.3855 |
| 0.0042 | 20.0 | 2500 | 2.8008 | 0.4227 | 0.6976 | 0.7154 | 0.3876 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| send-it/dqn-SpaceInvadersNoFrameskip-v4 | send-it | 2022-06-11T13:31:04Z | 7 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-06-11T13:30:29Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 558.50 +/- 102.18
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga send-it -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga send-it
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
| antonioricciardi/FrozenLake-v1 | antonioricciardi | 2022-06-11T13:06:56Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "FrozenLake-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-06-11T13:06:48Z |
---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
---
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
| louisdeco/camembert-base-finetuned-RankLineCause | louisdeco | 2022-06-11T12:50:01Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "camembert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-11T09:02:07Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: camembert-base-finetuned-RankLineCause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-RankLineCause
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3138
- Accuracy: 0.8152
- F1: 0.8297
- Recall: 0.8152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.3471 | 1.0 | 10019 | 0.3191 | 0.8156 | 0.8137 | 0.8156 |
| 0.317 | 2.0 | 20038 | 0.3138 | 0.8152 | 0.8297 | 0.8152 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| huggingtweets/adrianramy | huggingtweets | 2022-06-11T12:12:59Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-11T12:12:20Z |
---
language: en
thumbnail: http://www.huggingtweets.com/adrianramy/1654949574810/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1192394634305134593/kWwF0YSv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Adri</div>
<div style="text-align: center; font-size: 14px;">@adrianramy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Adri.
| Data | Adri |
| --- | --- |
| Tweets downloaded | 3050 |
| Retweets | 1585 |
| Short tweets | 275 |
| Tweets kept | 1190 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/30dqbz5d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adrianramy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16tp54yl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16tp54yl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/adrianramy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| huggingtweets/dekotale | huggingtweets | 2022-06-11T12:08:52Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-11T12:04:17Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dekotale/1654949168644/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1303333944360869888/DcCZvOOS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dekotale</div>
<div style="text-align: center; font-size: 14px;">@dekotale</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dekotale.
| Data | Dekotale |
| --- | --- |
| Tweets downloaded | 3125 |
| Retweets | 1528 |
| Short tweets | 433 |
| Tweets kept | 1164 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1l1uql9a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dekotale's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fv8rmutq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fv8rmutq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/dekotale')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality | shivarama23 | 2022-06-11T11:54:49Z | 85 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-06-11T11:41:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-image_quality
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9090909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-image_quality
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5242
- Accuracy: 0.9091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6762 | 0.6364 |
| No log | 2.0 | 2 | 0.6309 | 0.7273 |
| No log | 3.0 | 3 | 0.6095 | 0.6364 |
| No log | 4.0 | 4 | 0.5775 | 0.6364 |
| No log | 5.0 | 5 | 0.5443 | 0.8182 |
| No log | 6.0 | 6 | 0.5242 | 0.9091 |
| No log | 7.0 | 7 | 0.5149 | 0.8182 |
| No log | 8.0 | 8 | 0.5094 | 0.8182 |
| No log | 9.0 | 9 | 0.5038 | 0.8182 |
| 0.4095 | 10.0 | 10 | 0.4992 | 0.8182 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
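## How to use
The checkpoint should work with the standard 🤗 Transformers image-classification pipeline; a minimal sketch, where the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality",
)

# Any local path or URL to an image works here
print(classifier("example.jpg"))
```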
|
| mmillet/distilrubert-tiny-2nd-finetune-epru | mmillet | 2022-06-11T09:50:42Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-11T09:48:50Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-2nd-finetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-2nd-finetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3546
- Accuracy: 0.9325
- F1: 0.9328
- Precision: 0.9359
- Recall: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0686 | 1.0 | 12 | 0.2931 | 0.9141 | 0.9142 | 0.9163 | 0.9141 |
| 0.0269 | 2.0 | 24 | 0.2690 | 0.9448 | 0.9444 | 0.9449 | 0.9448 |
| 0.0282 | 3.0 | 36 | 0.3140 | 0.9141 | 0.9140 | 0.9168 | 0.9141 |
| 0.0185 | 4.0 | 48 | 0.2977 | 0.9571 | 0.9570 | 0.9576 | 0.9571 |
| 0.0103 | 5.0 | 60 | 0.3368 | 0.9264 | 0.9265 | 0.9296 | 0.9264 |
| 0.0088 | 6.0 | 72 | 0.3067 | 0.9387 | 0.9385 | 0.9389 | 0.9387 |
| 0.0152 | 7.0 | 84 | 0.3660 | 0.9264 | 0.9263 | 0.9282 | 0.9264 |
| 0.0315 | 8.0 | 96 | 0.3793 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
| 0.0258 | 9.0 | 108 | 0.3546 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
| Theivaprakasham/layoutlmv3-finetuned-wildreceipt | Theivaprakasham | 2022-06-11T09:14:40Z | 28 | 3 | transformers | ["transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:wild_receipt", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-06-11T07:21:14Z |
---
tags:
- generated_from_trainer
datasets:
- wild_receipt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-wildreceipt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wild_receipt
type: wild_receipt
args: WildReceipt
metrics:
- name: Precision
type: precision
value: 0.877212237618329
- name: Recall
type: recall
value: 0.8798678959680749
- name: F1
type: f1
value: 0.8785380599065679
- name: Accuracy
type: accuracy
value: 0.9249204782274871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-wildreceipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wild_receipt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
- Precision: 0.8772
- Recall: 0.8799
- F1: 0.8785
- Accuracy: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The WildReceipt dataset consists of 1740 receipt images annotated with 25 key-information categories and roughly 69000 text boxes in total. 1268 images are used for training and 472 for testing when fine-tuning the LayoutLMv3 model for key information extraction.
## Training procedure
The training code: https://github.com/Theivaprakasham/layoutlmv3/blob/main/training_codes/LayoutLMv3_training_WildReceipts_dataset.ipynb
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 1.3143 | 0.6709 | 0.2679 | 0.3829 | 0.6700 |
| No log | 0.63 | 200 | 0.8814 | 0.6478 | 0.5195 | 0.5766 | 0.7786 |
| No log | 0.95 | 300 | 0.6568 | 0.7205 | 0.6491 | 0.6829 | 0.8303 |
| No log | 1.26 | 400 | 0.5618 | 0.7544 | 0.7072 | 0.7300 | 0.8519 |
| 1.0284 | 1.58 | 500 | 0.5003 | 0.7802 | 0.7566 | 0.7682 | 0.8687 |
| 1.0284 | 1.89 | 600 | 0.4454 | 0.7941 | 0.7679 | 0.7807 | 0.8748 |
| 1.0284 | 2.21 | 700 | 0.4314 | 0.8142 | 0.7928 | 0.8033 | 0.8852 |
| 1.0284 | 2.52 | 800 | 0.3870 | 0.8172 | 0.8200 | 0.8186 | 0.8953 |
| 1.0284 | 2.84 | 900 | 0.3629 | 0.8288 | 0.8369 | 0.8329 | 0.9025 |
| 0.4167 | 3.15 | 1000 | 0.3537 | 0.8540 | 0.8200 | 0.8366 | 0.9052 |
| 0.4167 | 3.47 | 1100 | 0.3383 | 0.8438 | 0.8285 | 0.8361 | 0.9063 |
| 0.4167 | 3.79 | 1200 | 0.3403 | 0.8297 | 0.8493 | 0.8394 | 0.9062 |
| 0.4167 | 4.1 | 1300 | 0.3271 | 0.8428 | 0.8545 | 0.8487 | 0.9110 |
| 0.4167 | 4.42 | 1400 | 0.3182 | 0.8491 | 0.8518 | 0.8504 | 0.9131 |
| 0.2766 | 4.73 | 1500 | 0.3111 | 0.8491 | 0.8539 | 0.8515 | 0.9129 |
| 0.2766 | 5.05 | 1600 | 0.3177 | 0.8397 | 0.8620 | 0.8507 | 0.9124 |
| 0.2766 | 5.36 | 1700 | 0.3091 | 0.8676 | 0.8548 | 0.8612 | 0.9191 |
| 0.2766 | 5.68 | 1800 | 0.3080 | 0.8508 | 0.8645 | 0.8576 | 0.9162 |
| 0.2766 | 5.99 | 1900 | 0.3059 | 0.8492 | 0.8662 | 0.8576 | 0.9163 |
| 0.2114 | 6.31 | 2000 | 0.3184 | 0.8536 | 0.8657 | 0.8596 | 0.9147 |
| 0.2114 | 6.62 | 2100 | 0.3161 | 0.8583 | 0.8713 | 0.8648 | 0.9184 |
| 0.2114 | 6.94 | 2200 | 0.3055 | 0.8707 | 0.8682 | 0.8694 | 0.9220 |
| 0.2114 | 7.26 | 2300 | 0.3004 | 0.8689 | 0.8745 | 0.8717 | 0.9219 |
| 0.2114 | 7.57 | 2400 | 0.3111 | 0.8701 | 0.8720 | 0.8711 | 0.9211 |
| 0.174 | 7.89 | 2500 | 0.3130 | 0.8599 | 0.8741 | 0.8669 | 0.9198 |
| 0.174 | 8.2 | 2600 | 0.3034 | 0.8661 | 0.8748 | 0.8704 | 0.9219 |
| 0.174 | 8.52 | 2700 | 0.3005 | 0.8799 | 0.8673 | 0.8736 | 0.9225 |
| 0.174 | 8.83 | 2800 | 0.3043 | 0.8687 | 0.8804 | 0.8745 | 0.9240 |
| 0.174 | 9.15 | 2900 | 0.3121 | 0.8776 | 0.8704 | 0.8740 | 0.9242 |
| 0.1412 | 9.46 | 3000 | 0.3131 | 0.8631 | 0.8755 | 0.8692 | 0.9204 |
| 0.1412 | 9.78 | 3100 | 0.3067 | 0.8715 | 0.8773 | 0.8744 | 0.9233 |
| 0.1412 | 10.09 | 3200 | 0.3021 | 0.8751 | 0.8812 | 0.8782 | 0.9248 |
| 0.1412 | 10.41 | 3300 | 0.3092 | 0.8651 | 0.8808 | 0.8729 | 0.9228 |
| 0.1412 | 10.73 | 3400 | 0.3084 | 0.8776 | 0.8749 | 0.8762 | 0.9237 |
| 0.1254 | 11.04 | 3500 | 0.3156 | 0.8738 | 0.8785 | 0.8761 | 0.9237 |
| 0.1254 | 11.36 | 3600 | 0.3131 | 0.8723 | 0.8818 | 0.8770 | 0.9244 |
| 0.1254 | 11.67 | 3700 | 0.3108 | 0.8778 | 0.8781 | 0.8780 | 0.9250 |
| 0.1254 | 11.99 | 3800 | 0.3097 | 0.8778 | 0.8771 | 0.8775 | 0.9239 |
| 0.1254 | 12.3 | 3900 | 0.3115 | 0.8785 | 0.8801 | 0.8793 | 0.9251 |
| 0.111 | 12.62 | 4000 | 0.3108 | 0.8772 | 0.8799 | 0.8785 | 0.9249 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
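## How to use
A minimal loading sketch with standard 🤗 Transformers classes; the processor is taken from the base checkpoint named above, and inference additionally needs a receipt image plus, depending on the processor configuration, OCR words and bounding boxes:

```python
from transformers import AutoProcessor, AutoModelForTokenClassification

# Processor from the base checkpoint; fine-tuned weights from this repository
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForTokenClassification.from_pretrained(
    "Theivaprakasham/layoutlmv3-finetuned-wildreceipt"
)
```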
|
| huggingtweets/gustholomulers | huggingtweets | 2022-06-11T07:53:54Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-11T07:50:54Z |
---
language: en
thumbnail: http://www.huggingtweets.com/gustholomulers/1654934015981/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535477036353040384/tXI_s1Yi_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">soppy</div>
<div style="text-align: center; font-size: 14px;">@gustholomulers</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from soppy.
| Data | soppy |
| --- | --- |
| Tweets downloaded | 1482 |
| Retweets | 55 |
| Short tweets | 329 |
| Tweets kept | 1098 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nhfbopf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gustholomulers's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p5yu4wm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p5yu4wm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/gustholomulers')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| orzhan/t5-long-extract | orzhan | 2022-06-11T07:20:59Z | 105 | 1 | transformers | ["transformers", "pytorch", "t5", "feature-extraction", "text-generation-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-02T23:29:05Z |
A T5-small model fine-tuned for extractive summarization of long documents.
Repository: [GitHub](https://github.com/orzhan/t5-long-extract)
|
| orzhan/rut5-base-detox-v2 | orzhan | 2022-06-11T07:18:47Z | 8 | 1 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "PyTorch", "Transformers", "ru", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-05-25T06:51:41Z |
---
language:
- ru
tags:
- PyTorch
- Transformers
---
# rut5-base-detox-v2
The model was fine-tuned from sberbank-ai/ruT5-base on a parallel detoxification corpus.
* Task: `text2text generation`
* Type: `encoder-decoder`
* Tokenizer: `bpe`
* Dict size: `32 101`
* Num Parameters: `222 M`
|
titi7242229/roberta-base-bne-finetuned_personality_multi_2
|
titi7242229
| 2022-06-11T06:21:27Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-11T05:27:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_2
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2983
- Accuracy: 0.5429
## Model description
More information needed
## Intended uses & limitations
More information needed
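As a rough usage illustration only (nothing below comes from the model author): the checkpoint can be queried through the standard 🤗 Transformers text-classification pipeline, with label names taken from the model config. The Spanish example sentence is invented.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="titi7242229/roberta-base-bne-finetuned_personality_multi_2",
)
# The base model (roberta-base-bne) is Spanish, so a Spanish input is used for illustration
print(classifier("Me encanta conocer gente nueva y organizar planes con mis amigos."))
```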
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3256 | 1.0 | 125 | 2.2642 | 0.2161 |
| 1.815 | 2.0 | 250 | 1.9569 | 0.3919 |
| 1.614 | 3.0 | 375 | 1.7264 | 0.5014 |
| 1.1718 | 4.0 | 500 | 1.6387 | 0.5239 |
| 1.135 | 5.0 | 625 | 1.6259 | 0.5245 |
| 0.5637 | 6.0 | 750 | 1.6443 | 0.5372 |
| 0.3672 | 7.0 | 875 | 1.7146 | 0.5326 |
| 0.3249 | 8.0 | 1000 | 1.8099 | 0.5297 |
| 0.1791 | 9.0 | 1125 | 1.8888 | 0.5285 |
| 0.2175 | 10.0 | 1250 | 1.9228 | 0.5326 |
| 0.0465 | 11.0 | 1375 | 1.9753 | 0.5435 |
| 0.1154 | 12.0 | 1500 | 2.1102 | 0.5256 |
| 0.0745 | 13.0 | 1625 | 2.1319 | 0.5429 |
| 0.0281 | 14.0 | 1750 | 2.1743 | 0.5360 |
| 0.0173 | 15.0 | 1875 | 2.2087 | 0.5441 |
| 0.0269 | 16.0 | 2000 | 2.2456 | 0.5424 |
| 0.0107 | 17.0 | 2125 | 2.2685 | 0.5458 |
| 0.0268 | 18.0 | 2250 | 2.2893 | 0.5383 |
| 0.0245 | 19.0 | 2375 | 2.2943 | 0.5418 |
| 0.0156 | 20.0 | 2500 | 2.2983 | 0.5429 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/waffle_64
|
huggingtweets
| 2022-06-11T04:39:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-11T04:35:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/waffle_64/1654922313776/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1534033778787639296/a9JUby19_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🧇 Werewaffle🐺LOU NATION🐺</div>
<div style="text-align: center; font-size: 14px;">@waffle_64</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🧇 Werewaffle🐺LOU NATION🐺.
| Data | 🧇 Werewaffle🐺LOU NATION🐺 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 110 |
| Short tweets | 217 |
| Tweets kept | 2922 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rq6yndm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @waffle_64's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ucwnzfby) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ucwnzfby/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/waffle_64')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ablam/distilgpt2_fine_tuned_gcode
|
ablam
| 2022-06-11T03:52:00Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-11T01:09:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: distilgpt2_fine_tuned_gcode
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2_fine_tuned_gcode
This model is a fine-tuned version of [congcongwang/distilgpt2_fine_tuned_coder](https://huggingface.co/congcongwang/distilgpt2_fine_tuned_coder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1754 | 1.0 | 52144 | 4.1670 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.10.3
|
tclong/wav2vec2-base-vios-commonvoice-1
|
tclong
| 2022-06-11T03:01:54Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-10T11:09:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-commonvoice-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-commonvoice-1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8913
- Wer: 0.3621
## Model description
More information needed
## Intended uses & limitations
More information needed
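A minimal transcription sketch (assuming the standard 🤗 Transformers ASR pipeline; `sample.wav` is a placeholder for a 16 kHz audio file, and the spoken language of the training data is not stated on this card):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tclong/wav2vec2-base-vios-commonvoice-1")
# ffmpeg is required for decoding audio files; the path below is a placeholder
print(asr("sample.wav")["text"])
```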
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4706 | 0.55 | 500 | 3.4725 | 1.0 |
| 3.202 | 1.1 | 1000 | 2.7555 | 1.0008 |
| 1.0507 | 1.66 | 1500 | 1.0481 | 0.6196 |
| 0.7325 | 2.21 | 2000 | 0.8120 | 0.4958 |
| 0.599 | 2.76 | 2500 | 0.7035 | 0.4447 |
| 0.5224 | 3.31 | 3000 | 0.6761 | 0.4078 |
| 0.4844 | 3.86 | 3500 | 0.6688 | 0.4011 |
| 0.4234 | 4.42 | 4000 | 0.6080 | 0.3729 |
| 0.4237 | 4.97 | 4500 | 0.5953 | 0.3556 |
| 0.3986 | 5.52 | 5000 | 0.6054 | 0.3478 |
| 0.3554 | 6.07 | 5500 | 0.6193 | 0.3479 |
| 0.3446 | 6.62 | 6000 | 0.5809 | 0.3302 |
| 0.3104 | 7.17 | 6500 | 0.5713 | 0.3283 |
| 0.3166 | 7.73 | 7000 | 0.5593 | 0.3133 |
| 0.2938 | 8.28 | 7500 | 0.5645 | 0.3081 |
| 0.3061 | 8.83 | 8000 | 0.5508 | 0.3020 |
| 0.2986 | 9.38 | 8500 | 0.5462 | 0.3024 |
| 0.2939 | 9.93 | 9000 | 0.5544 | 0.3028 |
| 0.2633 | 10.49 | 9500 | 0.5496 | 0.3024 |
| 0.2683 | 11.04 | 10000 | 0.5439 | 0.2946 |
| 0.2714 | 11.59 | 10500 | 0.5524 | 0.2947 |
| 0.2354 | 12.14 | 11000 | 0.5267 | 0.2918 |
| 0.2488 | 12.69 | 11500 | 0.5728 | 0.2938 |
| 0.2479 | 13.25 | 12000 | 0.5802 | 0.2951 |
| 0.245 | 13.8 | 12500 | 0.5571 | 0.2890 |
| 0.2422 | 14.35 | 13000 | 0.5531 | 0.2871 |
| 0.2369 | 14.9 | 13500 | 0.5453 | 0.2860 |
| 0.2345 | 15.45 | 14000 | 0.5452 | 0.2847 |
| 0.2507 | 16.0 | 14500 | 0.5536 | 0.2884 |
| 0.2454 | 16.56 | 15000 | 0.5577 | 0.2871 |
| 0.2729 | 17.11 | 15500 | 0.6019 | 0.2931 |
| 0.2743 | 17.66 | 16000 | 0.5619 | 0.2905 |
| 0.3031 | 18.21 | 16500 | 0.6401 | 0.3006 |
| 0.315 | 18.76 | 17000 | 0.6044 | 0.2990 |
| 0.4025 | 19.32 | 17500 | 0.6739 | 0.3304 |
| 0.4915 | 19.87 | 18000 | 0.7267 | 0.3472 |
| 0.5539 | 20.42 | 18500 | 0.8078 | 0.3483 |
| 0.7138 | 20.97 | 19000 | 0.9362 | 0.3765 |
| 0.5766 | 21.52 | 19500 | 0.7921 | 0.3392 |
| 0.688 | 22.08 | 20000 | 0.8833 | 0.3693 |
| 0.6964 | 22.63 | 20500 | 0.9137 | 0.3469 |
| 0.7389 | 23.18 | 21000 | 0.9379 | 0.3460 |
| 0.7851 | 23.73 | 21500 | 1.0438 | 0.3653 |
| 0.7619 | 24.28 | 22000 | 0.9313 | 0.3873 |
| 0.7175 | 24.83 | 22500 | 0.8668 | 0.3789 |
| 0.6842 | 25.39 | 23000 | 0.8243 | 0.3761 |
| 0.6941 | 25.94 | 23500 | 0.8557 | 0.3804 |
| 0.7167 | 26.49 | 24000 | 0.8618 | 0.3875 |
| 0.721 | 27.04 | 24500 | 0.8686 | 0.3764 |
| 0.6949 | 27.59 | 25000 | 0.8773 | 0.3690 |
| 0.727 | 28.15 | 25500 | 0.8769 | 0.3666 |
| 0.7363 | 28.7 | 26000 | 0.8867 | 0.3634 |
| 0.7157 | 29.25 | 26500 | 0.8895 | 0.3626 |
| 0.7385 | 29.8 | 27000 | 0.8913 | 0.3621 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
enoriega/rule_learning_margin_1mm
|
enoriega
| 2022-06-11T02:04:28Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-06-10T01:52:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3806
- Margin Accuracy: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
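For orientation only, the hyperparameters above map roughly onto a 🤗 `TrainingArguments` configuration like the sketch below; this is an approximation rather than the exact training script, and `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rule_learning_margin_1mm",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2000,       # effective batch size of 8000 sequences
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                              # "Native AMP" mixed precision
)
```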
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.6482 | 0.16 | 20 | 0.6494 | 0.7263 |
| 0.5151 | 0.32 | 40 | 0.5088 | 0.7792 |
| 0.4822 | 0.48 | 60 | 0.4429 | 0.8045 |
| 0.4472 | 0.64 | 80 | 0.4265 | 0.8107 |
| 0.4352 | 0.8 | 100 | 0.4155 | 0.8132 |
| 0.4335 | 0.96 | 120 | 0.4128 | 0.8116 |
| 0.4113 | 1.12 | 140 | 0.4119 | 0.8142 |
| 0.4186 | 1.28 | 160 | 0.4075 | 0.8120 |
| 0.42 | 1.44 | 180 | 0.4072 | 0.8123 |
| 0.4175 | 1.6 | 200 | 0.4080 | 0.8130 |
| 0.4097 | 1.76 | 220 | 0.4031 | 0.8128 |
| 0.397 | 1.92 | 240 | 0.4004 | 0.8130 |
| 0.4115 | 2.08 | 260 | 0.3979 | 0.8136 |
| 0.4108 | 2.24 | 280 | 0.3940 | 0.8167 |
| 0.4125 | 2.4 | 300 | 0.3879 | 0.8218 |
| 0.4117 | 2.56 | 320 | 0.3848 | 0.8217 |
| 0.3967 | 2.72 | 340 | 0.3818 | 0.8231 |
| 0.3947 | 2.88 | 360 | 0.3813 | 0.8240 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/tonebot_
|
huggingtweets
| 2022-06-11T00:15:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-11T00:14:25Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tonebot_/1654906535396/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447253318380793858/VVNhWBGI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tone bot</div>
<div style="text-align: center; font-size: 14px;">@tonebot_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tone bot.
| Data | tone bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 537 |
| Tweets kept | 2713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ot29sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tonebot_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g614pb8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g614pb8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tonebot_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/froliki2108
|
huggingtweets
| 2022-06-11T00:04:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-11T00:02:55Z |
---
language: en
thumbnail: http://www.huggingtweets.com/froliki2108/1654905851117/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447692349493100549/1PV2c-PJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Froliki💉💉💉</div>
<div style="text-align: center; font-size: 14px;">@froliki2108</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Froliki💉💉💉.
| Data | Froliki💉💉💉 |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 1133 |
| Short tweets | 229 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tug3miv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @froliki2108's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/froliki2108')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/modelcard-creator-demo
|
nateraw
| 2022-06-10T23:58:39Z | 0 | 0 |
pytorch
|
[
"pytorch",
"modelcards",
"autogenerated-modelcard",
"en",
"dataset:beans",
"arxiv:1810.03993",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2022-06-10T23:40:23Z |
---
language:
- en
license: mit
library_name: pytorch
tags:
- modelcards
- autogenerated-modelcard
datasets:
- beans
metrics:
- accuracy
---
# modelcard-creator-demo
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Misuse and Out of Scope Use](#misuse-and-out-of-scope-use)
- [Limitations and Biases](#limitations-and-biases)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation Results](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Model Details
<!-- Give an overview of your model, the relevant research paper, who trained it, etc. -->
This isn't really a model, it's just a test repo to see if the [model card creator](https://huggingface.co/spaces/nateraw/modelcard-creator) works!
- Developed by: Nathan Raw
- Language(s):
- License: modelcard-creator-demo is licensed under the mit license
- Resources for more information:
- [Research Paper](https://arxiv.org/pdf/1810.03993.pdf)
- [GitHub Repo](https://github.com/nateraw/modelcards)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# A nice code snippet here that describes how to use the model...
```
## Uses
#### Direct Use
<!-- Describe what kind of tasks this model can be used for directly or problems it can solve. -->
[More Information Needed]
#### Downstream Use
<!-- Describe how this model could be leveraged by a downstream model (if applicable) -->
[More Information Needed]
#### Misuse and Out-of-scope Use
<!-- Describe ways in which this model ***should not*** be used. -->
[More Information Needed]
## Limitations and Biases
<!-- Describe limitations and biases of this model or models of its type. -->
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
[More Information Needed]
## Training
#### Training Data
<!-- Describe the dataset used to train this model. -->
<!-- Refer to data card if dataset is provided and exists on the hub -->
See the data card for additional information.
#### Training Procedure
<!-- Describe the preprocessing, hardware used, training hyperparameters, etc. -->
[More Information Needed]
## Evaluation Results
<!-- Describe evaluation results of this model across any datasets it was evaluated on. -->
[More Information Needed]
## Environmental Impact
<!-- Provide information to document the environmental impact of this model -->
You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700)
- **Hardware Type:**
- **Hours used:**
- **Cloud Provider:**
- **Compute Region:**
- **Carbon Emitted:**
## Citation Information
```bibtex
@inproceedings{Mitchell_2019,
doi = {10.1145/3287560.3287596},
url = {https://doi.org/10.1145%2F3287560.3287596},
year = 2019,
month = {jan},
publisher = {{ACM}
},
author = {Margaret Mitchell and Simone Wu and Andrew Zaldivar and Parker Barnes and Lucy Vasserman and Ben Hutchinson and Elena Spitzer and Inioluwa Deborah Raji and Timnit Gebru},
title = {Model Cards for Model Reporting},
booktitle = {Proceedings of the Conference on Fairness, Accountability, and Transparency}
}
```
|
ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar
|
ahmeddbahaa
| 2022-06-10T23:54:52Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"mt5",
"ar",
"abstractive summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-10T15:19:23Z |
---
license: apache-2.0
tags:
- summarization
- mt5
- ar
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: t5-arabic-base-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-base-finetuned-wikilingua-ar
This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2735
- Rouge-1: 20.72
- Rouge-2: 7.63
- Rouge-l: 18.75
- Gen Len: 18.74
- Bertscore: 70.79
## Model description
More information needed
## Intended uses & limitations
More information needed
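A minimal summarization sketch (standard 🤗 Transformers pipeline; the short Arabic passage is purely illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar")
# Made-up Arabic input; WikiLingua articles are how-to guides
article = "تعلم البرمجة يحتاج إلى الصبر والممارسة اليومية، فابدأ بلغة بسيطة وتدرب على مشاريع صغيرة قبل الانتقال إلى مشاريع أكبر."
print(summarizer(article, max_length=32, min_length=5)[0]["summary_text"])
```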
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/jedwill1999
|
huggingtweets
| 2022-06-10T23:10:10Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T23:09:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jedwill1999/1654902604867/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510152678919135250/lfEmlEGJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">a local</div>
<div style="text-align: center; font-size: 14px;">@jedwill1999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from a local.
| Data | a local |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 1080 |
| Short tweets | 525 |
| Tweets kept | 1641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qsnsp6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jedwill1999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jedwill1999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/boopysaur
|
huggingtweets
| 2022-06-10T22:57:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T22:56:08Z |
---
language: en
thumbnail: http://www.huggingtweets.com/boopysaur/1654901824865/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476816918879297559/2jt_Rt2L_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">boop ♡</div>
<div style="text-align: center; font-size: 14px;">@boopysaur</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from boop ♡.
| Data | boop ♡ |
| --- | --- |
| Tweets downloaded | 920 |
| Retweets | 162 |
| Short tweets | 128 |
| Tweets kept | 630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/398l195g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boopysaur's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3te0suw6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3te0suw6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/boopysaur')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
facebook/roberta-hate-speech-dynabench-r2-target
|
facebook
| 2022-06-10T22:36:17Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T21:52:46Z |
---
language: en
---
# LFTW R2 Target
The R2 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
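A minimal classification sketch (standard 🤗 Transformers pipeline; the input sentence is illustrative and the label names come from the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="facebook/roberta-hate-speech-dynabench-r2-target")
print(classifier("You are a wonderful person."))
```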
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub!
|
torli/trijki
|
torli
| 2022-06-10T20:45:14Z | 0 | 1 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-06-10T20:43:32Z |
---
license: artistic-2.0
---
git lfs install
git clone https://huggingface.co/torli/trijki
|
FritzOS/TEdetection_distiBERT_NER_V5
|
FritzOS
| 2022-06-10T20:35:11Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-10T20:34:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_NER_V5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_V5
This model is a fine-tuned version of [FritzOS/TEdetection_distilBERT_mLM_V5](https://huggingface.co/FritzOS/TEdetection_distilBERT_mLM_V5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0029
- Validation Loss: 0.0032
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
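As a rough illustration only (the entity label set is not documented here, and only TensorFlow weights appear to be published, hence `framework="tf"`; the input sentence is invented):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="FritzOS/TEdetection_distiBERT_NER_V5",
    framework="tf",                      # assumption: TF-only checkpoint
    aggregation_strategy="simple",
)
print(ner("Please add 10 GB of data to my prepaid plan."))
```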
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0029 | 0.0032 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
|
mmillet
| 2022-06-10T20:27:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T20:14:44Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.8653
- F1: 0.8656
- Precision: 0.8665
- Recall: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9172 | 1.0 | 69 | 0.5124 | 0.8246 | 0.8220 | 0.8271 | 0.8246 |
| 0.4709 | 2.0 | 138 | 0.4279 | 0.8528 | 0.8505 | 0.8588 | 0.8528 |
| 0.3194 | 3.0 | 207 | 0.3770 | 0.8737 | 0.8727 | 0.8740 | 0.8737 |
| 0.2459 | 4.0 | 276 | 0.3951 | 0.8685 | 0.8682 | 0.8692 | 0.8685 |
| 0.1824 | 5.0 | 345 | 0.4005 | 0.8831 | 0.8834 | 0.8841 | 0.8831 |
| 0.1515 | 6.0 | 414 | 0.4356 | 0.8800 | 0.8797 | 0.8801 | 0.8800 |
| 0.1274 | 7.0 | 483 | 0.4642 | 0.8727 | 0.8726 | 0.8731 | 0.8727 |
| 0.0833 | 8.0 | 552 | 0.5226 | 0.8633 | 0.8627 | 0.8631 | 0.8633 |
| 0.073 | 9.0 | 621 | 0.5327 | 0.8695 | 0.8686 | 0.8692 | 0.8695 |
| 0.0575 | 10.0 | 690 | 0.5908 | 0.8653 | 0.8656 | 0.8665 | 0.8653 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
FritzOS/TEdetection_distilBERT_mLM_V5
|
FritzOS
| 2022-06-10T19:43:24Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-10T19:43:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distilBERT_mLM_V5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distilBERT_mLM_V5
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_V2](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_V2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/malzliebchen
|
huggingtweets
| 2022-06-10T18:29:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T18:26:43Z |
---
language: en
thumbnail: http://www.huggingtweets.com/malzliebchen/1654885748305/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521909233024913408/4QsF2YzM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Malzbeard's Severed Head</div>
<div style="text-align: center; font-size: 14px;">@malzliebchen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Malzbeard's Severed Head.
| Data | Malzbeard's Severed Head |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 41 |
| Short tweets | 486 |
| Tweets kept | 2720 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e1wzn1e5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @malzliebchen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38g20s6n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38g20s6n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/malzliebchen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Clody0071/camembert-base-finetuned-paraphrase
|
Clody0071
| 2022-06-10T18:05:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"dataset:pawsx",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T16:20:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- pawsx
metrics:
- accuracy
- f1
model-index:
- name: camembert-base-finetuned-paraphrase
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pawsx
type: pawsx
args: fr
metrics:
- name: Accuracy
type: accuracy
value: 0.9085
- name: F1
type: f1
value: 0.9088724090678741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-paraphrase
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Accuracy: 0.9085
- F1: 0.9089
## Model description
More information needed
## Intended uses & limitations
More information needed
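A minimal sentence-pair sketch (PAWS-X is a paraphrase-identification task, so the model expects two sentences; the French pair below is invented, and the printed label names are whatever the model config defines):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "Clody0071/camembert-base-finetuned-paraphrase"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

# Encode the two sentences as a single pair, then pick the highest-scoring class
inputs = tokenizer(
    "Il fait très beau aujourd'hui.",
    "Le temps est magnifique aujourd'hui.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```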
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3918 | 1.0 | 772 | 0.3211 | 0.869 | 0.8696 |
| 0.2103 | 2.0 | 1544 | 0.2448 | 0.9075 | 0.9077 |
| 0.1622 | 3.0 | 2316 | 0.2577 | 0.9055 | 0.9059 |
| 0.1344 | 4.0 | 3088 | 0.2708 | 0.9085 | 0.9089 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
meln1k/dqn-SpaceInvadersNoFrameskip-v4
|
meln1k
| 2022-06-10T17:30:42Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-10T17:30:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 817.50 +/- 327.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga meln1k -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga meln1k
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
juancopi81/mt5-small-finetuned-amazon-en-es
|
juancopi81
| 2022-06-10T15:58:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-10T13:57:35Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: juancopi81/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juancopi81/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1238
- Validation Loss: 3.4046
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
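A minimal TensorFlow inference sketch (the checkpoint was trained with Keras, so `TFAutoModelForSeq2SeqLM` is assumed here; the English review text is invented):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

ckpt = "juancopi81/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = TFAutoModelForSeq2SeqLM.from_pretrained(ckpt)

inputs = tokenizer("I loved this product, it arrived quickly and works exactly as described.",
                   return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```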
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2166 | 4.4331 | 0 |
| 6.0386 | 3.8849 | 1 |
| 5.2369 | 3.6628 | 2 |
| 4.7882 | 3.5569 | 3 |
| 4.5111 | 3.4850 | 4 |
| 4.3250 | 3.4330 | 5 |
| 4.1930 | 3.4163 | 6 |
| 4.1238 | 3.4046 | 7 |
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
OTQ/q-FrozenLake-v1-4x4-noSlippery
|
OTQ
| 2022-06-10T15:14:57Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-10T15:14:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="OTQ/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-wikilingua-ar
|
ahmeddbahaa
| 2022-06-10T14:19:32Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"generated_from_trainer",
"dataset:wiki_lingua",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-10T02:47:03Z |
---
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mT5_multilingual_XLSum-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-wikilingua-ar
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5540
- Rouge-1: 27.46
- Rouge-2: 9.0
- Rouge-l: 22.59
- Gen Len: 43.41
- Bertscore: 73.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RalphX1/dqn-SpaceInvadersNoFrameskip-v4
|
RalphX1
| 2022-06-10T13:57:03Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-10T13:11:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RalphX1 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RalphX1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
google/muril-base-cased
|
google
| 2022-06-10T13:33:04Z | 10,230 | 35 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2103.10730",
"arxiv:1810.04805",
"arxiv:1911.02116",
"arxiv:2003.11080",
"arxiv:2009.05166",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
MuRIL: Multilingual Representations for Indian Languages
===
MuRIL is a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. We have released the pre-trained model (with the MLM layer intact, enabling masked word predictions) in this repository. We have also released the encoder on [TFHub](https://tfhub.dev/google/MuRIL/1) with an additional pre-processing module that processes raw text into the expected input format for the encoder. You can find more details on MuRIL in this [paper](http://arxiv.org/abs/2103.10730).
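Since the MLM layer is released intact, masked word prediction works out of the box. A minimal sketch with the `transformers` fill-mask pipeline (the Hindi sentence below is only illustrative):
```python
from transformers import pipeline

# The checkpoint ships with its MLM head, so the fill-mask pipeline can be used directly.
fill_mask = pipeline("fill-mask", model="google/muril-base-cased")

# Illustrative Hindi input; [MASK] is the standard BERT-style mask token used by MuRIL.
print(fill_mask("भारत एक महान [MASK] है।"))
```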
## Overview
This model uses a BERT base architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual BERT, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to
enhance low-resource performance. [7]
See the Training section for more details.
## Training
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments, as detailed below:
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
* Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
* Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower-resourced languages and set dupe factors accordingly. Note
that we limit transliterated pairs to Wikipedia only.
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1000K steps, with a batch size of 4096, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
## Uses & Limitations
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. It is also trained on transliterated data, a phenomenon
commonly observed in the Indian context. The model is not expected to perform
well on languages other than the 17 Indian languages used in pretraining.
## Evaluation
We provide the results of fine-tuning this model on a set of downstream tasks.<br/>
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/>
We also transliterate the test-sets and evaluate on the same.<br/>
We use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10].<br/>
For Tatoeba, we do not fine-tune the model, and use the pooled_output of the last layer as the sentence embedding.<br/>
All results are computed in a zero-shot setting, with English being the high resource training set language.
* Shown below are results on datasets from the XTREME benchmark (in %)
<br/>
PANX (F1) | ml | ta | te | en | bn | hi | mr | ur | Average
:-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 54.77 | 51.24 | 50.16 | 84.40 | 68.59 | 65.13 | 58.44 | 31.36 | 58.01
MuRIL | 75.74 | 71.86 | 64.99 | 84.43 | 85.97 | 78.09 | 74.63 | 85.07 | 77.60
<br/>
UDPOS (F1) | en | hi | mr | ta | te | ur | Average
:--------- | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 95.35 | 66.09 | 71.27 | 59.58 | 76.98 | 57.85 | 71.19
MuRIL | 95.55 | 64.47 | 82.95 | 62.57 | 85.63 | 58.93 | 75.02
<br/>
XNLI (Accuracy) | en | hi | ur | Average
:-------------- | ----: | ----: | ----: | ------:
mBERT | 81.72 | 60.52 | 58.20 | 66.81
MuRIL | 83.85 | 70.66 | 67.70 | 74.07
<br/>
Tatoeba (Accuracy) | ml | ta | te | bn | hi | mr | ur | Average
:----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 20.23 | 12.38 | 14.96 | 12.80 | 27.80 | 18.00 | 22.70 | 18.41
MuRIL | 26.35 | 36.81 | 17.52 | 20.20 | 31.50 | 26.60 | 17.10 | 25.15
<br/>
XQUAD (F1/EM) | en | hi | Average
:------------ | ----------: | ----------: | ----------:
mBERT | 83.85/72.86 | 58.46/43.53 | 71.15/58.19
MuRIL | 84.31/72.94 | 73.93/58.32 | 79.12/65.63
<br/>
MLQA (F1/EM) | en | hi | Average
:----------- | ----------: | ----------: | ----------:
mBERT | 80.39/67.30 | 50.28/35.18 | 65.34/51.24
MuRIL | 80.28/67.37 | 67.34/50.22 | 73.81/58.80
<br/>
TyDiQA (F1/EM) | en | bn | te | Average
:---------------- | ----------: | ----------: | ----------: | ----------:
mBERT | 75.21/65.00 | 60.62/45.13 | 53.55/44.54 | 63.13/51.66
MuRIL | 74.10/64.55 | 78.03/66.37 | 73.95/46.94 | 75.36/59.28
* Shown below are results on the transliterated versions of the above
test-sets.
PANX (F1) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average
:-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 7.53 | 1.04 | 8.24 | 41.77 | 25.46 | 8.34 | 7.30 | 14.24
MuRIL | 63.39 | 7.00 | 53.62 | 72.94 | 69.75 | 68.77 | 68.41 | 57.70
<br/>
UDPOS (F1) | hi_tr | mr_tr | ta_tr | te_tr | ur_tr | Average
:--------- | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 25.00 | 33.67 | 24.02 | 36.21 | 22.07 | 28.20
MuRIL | 63.09 | 67.19 | 58.40 | 65.30 | 56.49 | 62.09
<br/>
XNLI (Accuracy) | hi_tr | ur_tr | Average
:-------------- | ----: | ----: | ------:
mBERT | 39.6 | 38.86 | 39.23
MuRIL | 68.24 | 61.16 | 64.70
<br/>
Tatoeba (Accuracy) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average
:----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 2.18 | 1.95 | 5.13 | 1.80 | 3.00 | 2.40 | 2.30 | 2.68
MuRIL | 10.33 | 11.07 | 11.54 | 8.10 | 14.90 | 7.20 | 13.70 | 10.98
<br/>
## References
\[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint
arXiv:1810.04805, 2018.
\[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia)
\[3]: [Common Crawl](http://commoncrawl.org/the-data/)
\[4]:
[PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html)
\[5]: [Dakshina](https://github.com/google-research-datasets/dakshina)
\[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
\[7]: Conneau, Alexis, et al.
[Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf).
arXiv preprint arXiv:1911.02116 (2019).
\[8]: [IndicTrans](https://github.com/libindic/indic-trans)
\[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv
preprint arXiv:2003.11080.
\[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
[FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf)
arXiv preprint arXiv:2009.05166.
## Citation
If you find MuRIL useful in your applications, please cite the following paper:
```
@misc{khanuja2021muril,
title={MuRIL: Multilingual Representations for Indian Languages},
author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar},
year={2021},
eprint={2103.10730},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please mail your queries/feedback to muril-contact@google.com.
|
ahmeddbahaa/mt5-base-finetuned-wikilingua-ar
|
ahmeddbahaa
| 2022-06-10T13:00:43Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"ar",
"abstractive summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-10T02:40:53Z |
---
license: apache-2.0
tags:
- summarization
- mt5
- ar
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mt5-base-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-wikilingua-ar
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4936
- Rouge-1: 20.79
- Rouge-2: 7.6
- Rouge-l: 18.81
- Gen Len: 18.73
- Bertscore: 70.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
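## How to use
A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` summarization pipeline (the Arabic input below is a placeholder):
```python
from transformers import pipeline

# Sketch: load the fine-tuned mT5 checkpoint through the generic summarization pipeline.
summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-base-finetuned-wikilingua-ar")

arabic_text = "ضع هنا النص العربي المراد تلخيصه."  # placeholder input text
print(summarizer(arabic_text, max_length=64, min_length=10, do_sample=False))
```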
|
adi1494/distilbert-base-uncased-finetuned-squad
|
adi1494
| 2022-06-10T12:39:00Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-10T06:38:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: adi1494/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adi1494/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5671
- Validation Loss: 1.2217
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5532, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5671 | 1.2217 | 0 |
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
FabianWillner/distilbert-base-uncased-finetuned-squad-finetuned-triviaqa
|
FabianWillner
| 2022-06-10T11:54:41Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-10T09:44:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad-finetuned-triviaqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-finetuned-triviaqa
This model is a fine-tuned version of [FabianWillner/distilbert-base-uncased-finetuned-squad](https://huggingface.co/FabianWillner/distilbert-base-uncased-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9722 | 1.0 | 11195 | 0.9665 |
| 0.7558 | 2.0 | 22390 | 0.9583 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
stig/distilbert-base-uncased-finetuned
|
stig
| 2022-06-10T10:59:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-10T09:59:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0255 | 1.0 | 2312 | 1.9202 |
| 1.7483 | 2.0 | 4624 | 1.8437 |
| 1.5733 | 3.0 | 6936 | 1.8627 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
danieladejumo/q-FrozenLake-v1-4x4-noSlippery
|
danieladejumo
| 2022-06-10T10:25:31Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-10T10:25:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the training
# notebook used to create this model; they are not part of gym itself.
model = load_from_hub(repo_id="danieladejumo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
TurkuNLP/bert-large-finnish-cased-v1
|
TurkuNLP
| 2022-06-10T08:46:17Z | 152 | 2 |
transformers
|
[
"transformers",
"pytorch",
"fi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-06-10T07:53:16Z |
---
license: apache-2.0
language: fi
---
This is the large variant of FinBERT (TurkuNLP/bert-base-finnish-cased-v1). The training data is exactly the same.
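A minimal usage sketch, assuming the checkpoint works with the standard BERT auto classes (the Finnish sentence is only illustrative):
```python
from transformers import AutoTokenizer, AutoModel

# Sketch: load the large Finnish BERT for feature extraction or further fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-large-finnish-cased-v1")
model = AutoModel.from_pretrained("TurkuNLP/bert-large-finnish-cased-v1")

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="pt")  # "This is an example sentence."
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024) for the large model
```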
|
flood/distilbert-base-uncased-distilled-clinc
|
flood
| 2022-06-10T08:03:08Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T07:59:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9309677419354838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0389
- Accuracy: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6206 | 1.0 | 318 | 0.3251 | 0.6610 |
| 0.2571 | 2.0 | 636 | 0.1366 | 0.8584 |
| 0.1392 | 3.0 | 954 | 0.0813 | 0.9081 |
| 0.0967 | 4.0 | 1272 | 0.0598 | 0.9152 |
| 0.0779 | 5.0 | 1590 | 0.0503 | 0.9229 |
| 0.0675 | 6.0 | 1908 | 0.0451 | 0.9271 |
| 0.0615 | 7.0 | 2226 | 0.0425 | 0.9326 |
| 0.058 | 8.0 | 2544 | 0.0403 | 0.9316 |
| 0.0557 | 9.0 | 2862 | 0.0393 | 0.9306 |
| 0.0544 | 10.0 | 3180 | 0.0389 | 0.9310 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
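## How to use
A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the utterance is only illustrative):
```python
from transformers import pipeline

# Sketch: intent classification with the distilled CLINC model.
classifier = pipeline("text-classification", model="flood/distilbert-base-uncased-distilled-clinc")

print(classifier("Transfer $100 from my checking account to my savings account."))
```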
|
juns/imdb_finetuned_distilbert-base-uncased-finetuned-sst-2-english
|
juns
| 2022-06-10T07:37:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-18T07:05:06Z |
This is `distilbert-base-uncased-finetuned-sst-2-english` fine-tuned on IMDB, built for Boostcamp AI Tech 3.
|
flood/pegasus-samsum
|
flood
| 2022-06-10T07:00:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-10T06:24:51Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7052 | 0.54 | 500 | 1.4814 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
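## How to use
A minimal inference sketch, assuming the standard `transformers` summarization pipeline (the dialogue is only illustrative):
```python
from transformers import pipeline

# Sketch: dialogue summarization with the SAMSum-fine-tuned Pegasus checkpoint.
summarizer = pipeline("summarization", model="flood/pegasus-samsum")

dialogue = """Anna: Are we still on for dinner tonight?
Ben: Yes, 7pm at the usual place.
Anna: Perfect, see you there!"""
print(summarizer(dialogue))
```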
|
huggingtweets/macarena_olona
|
huggingtweets
| 2022-06-10T06:32:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T06:10:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/macarena_olona/1654842717478/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535020786007916545/po7DO1ln_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Macarena Olona</div>
<div style="text-align: center; font-size: 14px;">@macarena_olona</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Macarena Olona.
| Data | Macarena Olona |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 1797 |
| Short tweets | 225 |
| Tweets kept | 1223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yx7hguo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @macarena_olona's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i64c9y6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i64c9y6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/macarena_olona')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ralee85
|
huggingtweets
| 2022-06-10T06:27:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T06:27:51Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/964497068424249345/Y6ce6atF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rob Lee</div>
<div style="text-align: center; font-size: 14px;">@ralee85</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rob Lee.
| Data | Rob Lee |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 22 |
| Short tweets | 1590 |
| Tweets kept | 1638 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/164xyalb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ralee85's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pc7ca11) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pc7ca11/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ralee85')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1
|
RuiqianLi
| 2022-06-10T03:17:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:uob_singlish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-09T10:16:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: wav2vec2-xls-r-300m_Mrbrown_finetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m_Mrbrown_finetune1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the uob_singlish dataset.
## Notes
This run used a self-made dataset: the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and transcribed by hand, about 4 minutes in total. The word error rate stays at 1.0 and we do not yet know why, but it is most likely a dataset problem, since fine-tuning the same pre-trained model on a standard Singlish corpus previously gave good results (see RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab).
It achieves the following results on the evaluation set:
- Loss: 3.0927
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7943 | 20.0 | 200 | 3.0597 | 1.0 |
| 2.9902 | 40.0 | 400 | 3.1604 | 1.0 |
| 2.9696 | 60.0 | 600 | 3.1112 | 1.0 |
| 2.8885 | 80.0 | 800 | 3.0234 | 1.0 |
| 2.8154 | 100.0 | 1000 | 3.0927 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
YeRyeongLee/bert-base-cased-finetuned-filtered-0609
|
YeRyeongLee
| 2022-06-10T02:29:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T00:30:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-filtered-0609
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2410
- Accuracy: 0.9748
- Precision: 0.9751
- Recall: 0.9748
- F1: 0.9749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2028 | 1.0 | 3180 | 0.2405 | 0.9535 | 0.9561 | 0.9535 | 0.9538 |
| 0.1632 | 2.0 | 6360 | 0.1686 | 0.9660 | 0.9664 | 0.9660 | 0.9661 |
| 0.1203 | 3.0 | 9540 | 0.1625 | 0.9648 | 0.9655 | 0.9648 | 0.9648 |
| 0.1233 | 4.0 | 12720 | 0.1510 | 0.9698 | 0.9702 | 0.9698 | 0.9699 |
| 0.0823 | 5.0 | 15900 | 0.1600 | 0.9730 | 0.9732 | 0.9730 | 0.9730 |
| 0.0453 | 6.0 | 19080 | 0.1953 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
| 0.031 | 7.0 | 22260 | 0.1754 | 0.9755 | 0.9755 | 0.9755 | 0.9755 |
| 0.0166 | 8.0 | 25440 | 0.2155 | 0.9739 | 0.9740 | 0.9739 | 0.9739 |
| 0.0036 | 9.0 | 28620 | 0.2519 | 0.9730 | 0.9733 | 0.9730 | 0.9730 |
| 0.0035 | 10.0 | 31800 | 0.2410 | 0.9748 | 0.9751 | 0.9748 | 0.9749 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
huggingtweets/loganpaul
|
huggingtweets
| 2022-06-10T02:29:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T02:27:26Z |
---
language: en
thumbnail: http://www.huggingtweets.com/loganpaul/1654828143127/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1401837042934468611/okzqIoMb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Logan Paul</div>
<div style="text-align: center; font-size: 14px;">@loganpaul</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Logan Paul.
| Data | Logan Paul |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 170 |
| Short tweets | 318 |
| Tweets kept | 2757 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wj9pph5f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @loganpaul's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sqzuxgo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sqzuxgo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/loganpaul')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
RuiqianLi/malaya-speech_Mrbrown_finetune1
|
RuiqianLi
| 2022-06-10T02:23:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:uob_singlish",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-09T09:01:56Z |
---
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: malaya-speech_Mrbrown_finetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# malaya-speech_Mrbrown_finetune1
This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset.
## Notes
This run used a self-made dataset: the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and transcribed by hand, about 4 minutes in total. The fine-tuning result is very poor, which may mean that the fine-tuning data needs to be high quality and at least several hours long, or that the learning rate (0.01) is set too high. We are still investigating which factors matter most.
It achieves the following results on the evaluation set:
- Loss: 3.8458
- Wer: 1.01
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| 0.3186 | 20.0 | 200 | 4.2225 | 1.13 |
| 0.4911 | 40.0 | 400 | 4.0427 | 0.99 |
| 0.9014 | 60.0 | 600 | 5.3285 | 1.04 |
| 1.0955 | 80.0 | 800 | 3.6922 | 1.02 |
| 0.7533 | 100.0 | 1000 | 3.8458 | 1.01 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
HrayrM/distilbert-base-uncased-finetuned-clinc
|
HrayrM
| 2022-06-10T01:17:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T00:50:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9135483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2843 | 1.0 | 318 | 3.2793 | 0.7448 |
| 2.6208 | 2.0 | 636 | 1.8750 | 0.8297 |
| 1.5453 | 3.0 | 954 | 1.1565 | 0.8919 |
| 1.0141 | 4.0 | 1272 | 0.8628 | 0.9090 |
| 0.795 | 5.0 | 1590 | 0.7771 | 0.9135 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
|
ExusAI/SRWNN
|
ExusAI
| 2022-06-10T00:54:14Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-06-10T00:45:58Z |
---
license: mit
---
Super-resolution model for anime and illustrations, based on VGG11 and waifu2x. The model was trained on around 10k high-resolution images (at least HD).
https://github.com/Exusai/SuperResolutionWaifuNN
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
|
nestoralvaro
| 2022-06-10T00:52:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-09T23:49:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 2.8146
- Rouge2: 0.6707
- Rougel: 2.8187
- Rougelsum: 2.8098
- Gen Len: 6.4901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 3869 | nan | 2.8146 | 0.6707 | 2.8187 | 2.8098 | 6.4901 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
UBC-NLP/turjuman
|
UBC-NLP
| 2022-06-10T00:24:37Z | 32 | 7 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.03933",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T22:07:50Z |
<p align="center">
<br>
<img src="https://github.com/UBC-NLP/turjuman/raw/master//images/turjuman_logo.png"/>
<br>
</p>
<img src="https://github.com/UBC-NLP/turjuman/raw/master/images/turjuman.png" alt="AraT5" width="50%" height="50%" align="right"/>
Turjuman is a neural machine translation toolkit. It translates from 20 languages into Modern Standard Arabic (MSA). Turjuman is described in this paper:
[**TURJUMAN: A Public Toolkit for Neural Arabic Machine Translation**](https://arxiv.org/abs/2206.03933).
Turjuman exploits our [AraT5 model](https://github.com/UBC-NLP/araT5), which endows it with a powerful ability to decode into Arabic. The toolkit offers a number of diverse decoding methods, making it also suited for acquiring paraphrases of the MSA translations as an added value.
**Github**: [https://github.com/UBC-NLP/turjuman](https://github.com/UBC-NLP/turjuman)
**Demo**: [https://demos.dlnlp.ai/turjuman](https://demos.dlnlp.ai/turjuman)
**Paper**: [https://arxiv.org/abs/2206.03933](https://arxiv.org/abs/2206.03933)
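## Quick example
The released checkpoint is a seq2seq (AraT5-based) model, so it can also be driven directly through the generic `transformers` API. A minimal sketch; the recommended interface is the TURJUMAN toolkit itself, and the raw input format used below is an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch: drive the checkpoint with the generic seq2seq API.
# The supported interface is the TURJUMAN toolkit (see the GitHub link above);
# the task-prefix format used here is an assumption, not the documented prompt.
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/turjuman")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/turjuman")

inputs = tokenizer("translate English to Arabic: How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```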
## License
turjuman(-py) is Apache-2.0 licensed. The license applies to the pre-trained models as well.
## Citation
If you use TURJUMAN toolkit or the pre-trained models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```
@inproceedings{nagoudi-osact5-2022-turjuman,
title={TURJUMAN: A Public Toolkit for Neural Arabic Machine Translation},
author={Nagoudi, El Moatez Billah and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad},
booktitle = "Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT5)",
month = "June",
year = "2022",
address = "Marseille, France",
publisher = "European Language Resource Association",
}
```
|
kjunelee/distilbert-base-uncased-finetuned-emotion
|
kjunelee
| 2022-06-10T00:24:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T00:03:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9313235272564213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1595
- Accuracy: 0.931
- F1: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.1873 | 0.924 | 0.9234 |
| 0.1992 | 2.0 | 250 | 0.1649 | 0.929 | 0.9293 |
| 0.1992 | 3.0 | 375 | 0.1595 | 0.931 | 0.9313 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
nthakur/contriever-base-msmarco
|
nthakur
| 2022-06-09T22:01:51Z | 1,072 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-09T21:50:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# nthakur/contriever-base-msmarco
This is a port of the [Contriever MSMARCO Model](https://huggingface.co/facebook/contriever-msmarco) to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nthakur/contriever-base-msmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nthakur/contriever-base-msmarco')
model = AutoModel.from_pretrained('nthakur/contriever-base-msmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nthakur/contriever-base-msmarco)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [Contriever Model](https://github.com/facebookresearch/contriever).
<!--- Describe where people can find more information -->
|
Birb80/Bird
|
Birb80
| 2022-06-09T21:17:59Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-09T21:17:59Z |
---
license: bigscience-bloom-rail-1.0
---
|
q2-jlbar/segformer-b0-finetuned-brooks-or-dunn
|
q2-jlbar
| 2022-06-09T19:47:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-06-09T18:20:04Z |
---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-brooks-or-dunn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-brooks-or-dunn
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the q2-jlbar/BrooksOrDunn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1158
- Mean Iou: nan
- Mean Accuracy: nan
- Overall Accuracy: nan
- Per Category Iou: [nan, nan]
- Per Category Accuracy: [nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 0.5153 | 4.0 | 20 | 0.5276 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.4082 | 8.0 | 40 | 0.3333 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.3157 | 12.0 | 60 | 0.2773 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2911 | 16.0 | 80 | 0.2389 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2395 | 20.0 | 100 | 0.1982 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2284 | 24.0 | 120 | 0.1745 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1818 | 28.0 | 140 | 0.1595 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1549 | 32.0 | 160 | 0.1556 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1351 | 36.0 | 180 | 0.1387 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1254 | 40.0 | 200 | 0.1263 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1412 | 44.0 | 220 | 0.1190 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1179 | 48.0 | 240 | 0.1158 | nan | nan | nan | [nan, nan] | [nan, nan] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
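## How to use
A minimal inference sketch, using the SegFormer classes available in this Transformers release (the image path is a placeholder):
```python
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Sketch: semantic segmentation with the fine-tuned checkpoint.
feature_extractor = SegformerFeatureExtractor.from_pretrained("q2-jlbar/segformer-b0-finetuned-brooks-or-dunn")
model = SegformerForSemanticSegmentation.from_pretrained("q2-jlbar/segformer-b0-finetuned-brooks-or-dunn")

image = Image.open("example.jpg")  # placeholder image path
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, num_labels, height / 4, width / 4)
```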
|
huggingtweets/midudev
|
huggingtweets
| 2022-06-09T18:48:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T18:33:17Z |
---
language: en
thumbnail: http://www.huggingtweets.com/midudev/1654800505422/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1526668354609680384/r85fytOs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🔴 EN DIRECTO twitch.tv/midudev</div>
<div style="text-align: center; font-size: 14px;">@midudev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🔴 EN DIRECTO twitch.tv/midudev.
| Data | 🔴 EN DIRECTO twitch.tv/midudev |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 824 |
| Short tweets | 163 |
| Tweets kept | 2259 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11iwoc6b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @midudev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/midudev')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bookpanda/wangchanberta-base-att-spm-uncased-finetuned-imdb
|
bookpanda
| 2022-06-09T18:17:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-28T08:22:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-finetuned-imdb
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1831 | 1.0 | 4826 | 0.1542 |
| 0.1 | 2.0 | 9652 | 0.1075 |
| 0.0946 | 3.0 | 14478 | 0.0443 |
| 0.0618 | 4.0 | 19304 | 0.0830 |
| 0.0783 | 5.0 | 24130 | 0.0810 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
kabelomalapane/En-Ts
|
kabelomalapane
| 2022-06-09T17:33:20Z | 69 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-09T16:33:13Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Ts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Ts
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ts](https://huggingface.co/Helsinki-NLP/opus-mt-en-ts) on the None dataset.
It achieves the following results on the evaluation set:
Before training:
- Loss: 3.17
- Bleu: 14.513
After training:
- Loss: 1.3320
- Bleu: 36.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
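A minimal usage sketch, assuming the standard 🤗 `transformers` translation pipeline (the example sentence is an arbitrary illustration):

```python
from transformers import pipeline

# Load the fine-tuned English->Tsonga MarianMT checkpoint from the Hub
translator = pipeline("translation", model="kabelomalapane/En-Ts")

print(translator("The children are playing outside.")[0]["translation_text"])
```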
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.7082 | 1.0 | 5929 | 1.6902 | 32.1311 |
| 1.4606 | 2.0 | 11858 | 1.4996 | 34.1129 |
| 1.3182 | 3.0 | 17787 | 1.4107 | 35.7428 |
| 1.2543 | 4.0 | 23716 | 1.3631 | 36.2009 |
| 1.2116 | 5.0 | 29645 | 1.3389 | 36.5876 |
| 1.1723 | 6.0 | 35574 | 1.3320 | 36.7481 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ksabeh/bert-base-uncased-attribute-correction-mlm
|
ksabeh
| 2022-06-09T17:23:14Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-09T09:08:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/bert-base-uncased-mlm-electronics-attribute-correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert-base-uncased-mlm-electronics-attribute-correction
This model is a fine-tuned version of [ksabeh/bert-base-uncased-mlm-electronics](https://huggingface.co/ksabeh/bert-base-uncased-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0524
- Validation Loss: 0.0520
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1459 | 0.0678 | 0 |
| 0.0524 | 0.0520 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios-commonvoice
|
tclong
| 2022-06-09T17:17:08Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-08T18:03:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-commonvoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3823
- Wer: 0.2401
## Model description
More information needed
## Intended uses & limitations
More information needed
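A minimal transcription sketch, assuming the standard `transformers` ASR pipeline and a 16 kHz mono recording (`sample.wav` is a placeholder path):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="tclong/wav2vec2-base-vios-commonvoice")

# Decoding a local file requires ffmpeg; the path below is a placeholder
print(asr("sample.wav")["text"])
```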
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2268 | 0.66 | 500 | 0.8746 | 0.5939 |
| 0.8728 | 1.32 | 1000 | 0.6435 | 0.4554 |
| 0.6899 | 1.99 | 1500 | 0.5655 | 0.3995 |
| 0.5842 | 2.65 | 2000 | 0.5267 | 0.3694 |
| 0.5371 | 3.31 | 2500 | 0.4980 | 0.3431 |
| 0.4921 | 3.97 | 3000 | 0.4781 | 0.3276 |
| 0.4508 | 4.64 | 3500 | 0.4434 | 0.3134 |
| 0.433 | 5.3 | 4000 | 0.4348 | 0.2963 |
| 0.404 | 5.96 | 4500 | 0.4248 | 0.2874 |
| 0.3834 | 6.62 | 5000 | 0.4163 | 0.2775 |
| 0.3784 | 7.28 | 5500 | 0.4104 | 0.2751 |
| 0.3669 | 7.95 | 6000 | 0.4143 | 0.2724 |
| 0.3462 | 8.61 | 6500 | 0.4131 | 0.2699 |
| 0.3364 | 9.27 | 7000 | 0.4070 | 0.2617 |
| 0.3249 | 9.93 | 7500 | 0.4076 | 0.2603 |
| 0.3154 | 10.6 | 8000 | 0.3998 | 0.2577 |
| 0.3117 | 11.26 | 8500 | 0.3930 | 0.2505 |
| 0.3101 | 11.92 | 9000 | 0.4003 | 0.2492 |
| 0.298 | 12.58 | 9500 | 0.3960 | 0.2496 |
| 0.2968 | 13.24 | 10000 | 0.3877 | 0.2469 |
| 0.29 | 13.91 | 10500 | 0.3870 | 0.2456 |
| 0.2921 | 14.57 | 11000 | 0.3823 | 0.2401 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
|
ajtamayoh
| 2022-06-09T17:15:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-09T16:33:08Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Precision: 0.9012
- Recall: 0.6942
- F1: 0.7842
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
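A minimal usage sketch, assuming the standard `transformers` token-classification pipeline (the clinical sentence below is an arbitrary example):

```python
from transformers import pipeline

# "simple" aggregation merges word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned",
    aggregation_strategy="simple",
)

print(ner("Paciente de 45 años con fiebre y tos persistente."))
```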
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0605 | 1.0 | 2568 | 0.0625 | 0.9400 | 0.6322 | 0.7560 | 0.9836 |
| 0.0475 | 2.0 | 5136 | 0.0622 | 0.9533 | 0.6572 | 0.7781 | 0.9849 |
| 0.0374 | 3.0 | 7704 | 0.0552 | 0.9261 | 0.6784 | 0.7831 | 0.9855 |
| 0.0246 | 4.0 | 10272 | 0.0693 | 0.9381 | 0.6658 | 0.7788 | 0.9849 |
| 0.0126 | 5.0 | 12840 | 0.0974 | 0.8918 | 0.6830 | 0.7735 | 0.9849 |
| 0.0061 | 6.0 | 15408 | 0.0886 | 0.8771 | 0.7099 | 0.7847 | 0.9850 |
| 0.0031 | 7.0 | 17976 | 0.0973 | 0.9012 | 0.6942 | 0.7842 | 0.9857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
GioReg/notiBERTo
|
GioReg
| 2022-06-09T17:08:29Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-07T14:24:36Z |
---
language:
- it
---
A model called notiBERTo was created by running a training phase with the unsupervised masked-language modeling (MLM) objective to build and tune the model weights; this does not require labelled text. The idea was to obtain a BERT-based model for Italian focused on the language typically used in online news reporting, so that it reproduces the style and vocabulary of the press.
For the input data, publicly available databases organized by the "Wortschatz Leipzig" portal of Leipzig University were used. The portal gives access to the Leipzig Corpora Collection, which hosts 900 text collections divided by language (250 languages are covered) and by topic, obtained mainly by crawling websites. In particular, the chosen databases were collections of news gathered daily through RSS feeds and databases obtained by crawling the main Italian news websites, split into sub-databases by collection year. To build notiBERTo, the databases for 2018, 2019 and 2020 were used, for a total of about 700MB.
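A minimal usage sketch, assuming the standard `transformers` fill-mask pipeline and the RoBERTa-style `<mask>` token:

```python
from transformers import pipeline

# notiBERTo is a RoBERTa-style masked language model, so the mask token is <mask>
unmasker = pipeline("fill-mask", model="GioReg/notiBERTo")

print(unmasker("Il governo ha approvato la nuova <mask>."))
```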
|
YaYaB/SpaceInvadersNoFrameskip-v4-1
|
YaYaB
| 2022-06-09T16:24:57Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-09T16:23:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 511.00 +/- 164.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YaYaB -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga YaYaB
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
YeRyeongLee/roberta-base-finetuned-filtered-0609
|
YeRyeongLee
| 2022-06-09T16:20:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T14:14:27Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-filtered-0609
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- Accuracy: 0.9824
- Precision: 0.9824
- Recall: 0.9824
- F1: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
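A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the label set comes from the undocumented fine-tuning data, so outputs use whatever labels were used during training):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="YeRyeongLee/roberta-base-finetuned-filtered-0609")

print(classifier("This is an example sentence to classify."))
```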
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1817 | 1.0 | 3180 | 0.1883 | 0.9651 | 0.9654 | 0.9651 | 0.9651 |
| 0.1647 | 2.0 | 6360 | 0.1264 | 0.9777 | 0.9778 | 0.9777 | 0.9777 |
| 0.1295 | 3.0 | 9540 | 0.1514 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
| 0.0991 | 4.0 | 12720 | 0.1487 | 0.9761 | 0.9763 | 0.9761 | 0.9761 |
| 0.0749 | 5.0 | 15900 | 0.1119 | 0.9802 | 0.9802 | 0.9802 | 0.9802 |
| 0.0532 | 6.0 | 19080 | 0.1357 | 0.9789 | 0.9790 | 0.9789 | 0.9789 |
| 0.0471 | 7.0 | 22260 | 0.1397 | 0.9780 | 0.9782 | 0.9780 | 0.9780 |
| 0.0153 | 8.0 | 25440 | 0.1568 | 0.9777 | 0.9778 | 0.9777 | 0.9777 |
| 0.0147 | 9.0 | 28620 | 0.1274 | 0.9824 | 0.9824 | 0.9824 | 0.9824 |
| 0.0135 | 10.0 | 31800 | 0.1343 | 0.9824 | 0.9824 | 0.9824 | 0.9824 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
huggingtweets/elrichmc
|
huggingtweets
| 2022-06-09T16:04:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T16:01:27Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elrichmc/1654790629445/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484686785812832263/Beh-qGPk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ElRichMC</div>
<div style="text-align: center; font-size: 14px;">@elrichmc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ElRichMC.
| Data | ElRichMC |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 203 |
| Short tweets | 618 |
| Tweets kept | 2424 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jeok5aq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elrichmc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28fmqsme) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28fmqsme/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elrichmc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
buio/attention_mil_classification
|
buio
| 2022-06-09T15:10:38Z | 0 | 0 |
keras
|
[
"keras",
"tensorboard",
"tf-keras",
"computer-vision",
"classification",
"multiple-instance-learning ",
"region:us"
] | null | 2022-06-09T14:46:43Z |
---
library_name: keras
tags:
- computer-vision
- classification
- 'multiple-instance-learning '
---
## Model description
More information needed
## Intended uses & limitations
More information needed
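A minimal loading sketch, assuming the `huggingface_hub` Keras helper (the expected bag/instance input shape is defined by the uploaded architecture and is not documented here):

```python
from huggingface_hub import from_pretrained_keras

# Download the saved Keras model from this repository and rebuild it
model = from_pretrained_keras("buio/attention_mil_classification")
model.summary()
```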
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
| Epochs | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy |
|--- |--- |--- |--- |--- |
| 1| 0.315| 0.915| 0.066| 0.983|
| 2| 0.089| 0.982| 0.049| 0.99|
| 3| 0.078| 0.987| 0.084| 0.983|
| 4| 0.059| 0.983| 0.033| 0.993|
| 5| 0.042| 0.99| 0.053| 0.99|
| 6| 0.042| 0.996| 0.019| 0.993|
| 7| 0.013| 0.999| 0.067| 0.987|
| 8| 0.055| 0.988| 0.049| 0.99|
| 9| 0.005| 1.0| 0.039| 0.993|
| 10| 0.005| 1.0| 0.038| 0.99|
| 11| 0.039| 0.995| 0.214| 0.97|
| 12| 0.008| 1.0| 0.039| 0.99|
| 13| 0.002| 1.0| 0.047| 0.993|
| 14| 0.016| 0.999| 0.057| 0.99|
| 15| 0.046| 0.993| 0.026| 0.997|
| 16| 0.002| 1.0| 0.06| 0.99|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
buio/vq-vae
|
buio
| 2022-06-09T15:06:33Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"computer-vision",
"generative",
"variational-autoencoder",
"vq-vae",
"region:us"
] | null | 2022-06-09T15:04:32Z |
---
library_name: keras
tags:
- computer-vision
- generative
- variational-autoencoder
- vq-vae
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
veb/twitch-roberta-base-sentiment-latest
|
veb
| 2022-06-09T14:34:50Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T05:14:29Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: veb/twitch-roberta-base-sentiment-latest
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# veb/twitch-roberta-base-sentiment-latest
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0941
- Train Sparse Categorical Accuracy: 0.375
- Validation Loss: 1.0186
- Validation Sparse Categorical Accuracy: 0.3333
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 1.1272 | 0.3281 | 1.0190 | 0.3333 | 0 |
| 1.1254 | 0.2969 | 1.1164 | 0.0 | 1 |
| 1.0941 | 0.375 | 1.0186 | 0.3333 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.7.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
fusing/ddim-celeba-hq_copy
|
fusing
| 2022-06-09T14:11:04Z | 2 | 0 |
transformers
|
[
"transformers",
"ddim_diffusion",
"arxiv:2010.02502",
"endpoints_compatible",
"region:us"
] | null | 2022-06-09T14:07:12Z |
---
tags:
- ddim_diffusion
---
# Denoising Diffusion Implicit Models (DDIM)
**Paper**: [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502)
**Abstract**:
*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
**Explanation of `eta` and `num_inference_steps`**
- `num_inference_steps` is called *S* in the following table
- `eta` is called *η* in the following table

## Usage
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
import PIL.Image
import numpy as np
model_id = "fusing/ddim-celeba-hq"
# load model and scheduler
ddpm = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = ddpm(eta=0.0, num_inference_steps=50)
# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. 
2. 
3. 
4. 
|
i8pxgd2s/q-Taxi-v3
|
i8pxgd2s
| 2022-06-09T13:26:49Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-09T13:26:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="i8pxgd2s/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
victorlee071200/bert-base-cased-finetuned-squad_v2
|
victorlee071200
| 2022-06-09T13:16:06Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-08T17:41:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-cased-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad_v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3226
## Model description
More information needed
## Intended uses & limitations
More information needed
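A minimal usage sketch, assuming the standard `transformers` question-answering pipeline (question and context are arbitrary examples):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="victorlee071200/bert-base-cased-finetuned-squad_v2")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="bert-base-cased was fine-tuned on the SQuAD v2 dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```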
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.03 | 1.0 | 8255 | 1.1334 |
| 0.7511 | 2.0 | 16510 | 1.1299 |
| 0.5376 | 3.0 | 24765 | 1.3226 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
qualitydatalab/autotrain-car-review-project-966432121
|
qualitydatalab
| 2022-06-09T13:04:21Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:qualitydatalab/autotrain-data-car-review-project",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T12:30:26Z |
---
tags: autotrain
language: en
widget:
- text: "I love driving this car"
datasets:
- qualitydatalab/autotrain-data-car-review-project
co2_eq_emissions: 0.21529888368377176
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 966432121
- CO2 Emissions (in grams): 0.21529888368377176
## Validation Metrics
- Loss: 0.6013365983963013
- Accuracy: 0.737791286727457
- Macro F1: 0.729171012281939
- Micro F1: 0.737791286727457
- Weighted F1: 0.729171012281939
- Macro Precision: 0.7313770127538427
- Micro Precision: 0.737791286727457
- Weighted Precision: 0.7313770127538428
- Macro Recall: 0.737791286727457
- Micro Recall: 0.737791286727457
- Weighted Recall: 0.737791286727457
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love driving this car"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432121
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432121", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432121", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
huggingtweets/zaidalyafeai
|
huggingtweets
| 2022-06-09T13:03:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T13:02:27Z |
---
language: en
thumbnail: http://www.huggingtweets.com/zaidalyafeai/1654779787447/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521723273922461696/m8_zotM4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Zaid زيد</div>
<div style="text-align: center; font-size: 14px;">@zaidalyafeai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Zaid زيد.
| Data | Zaid زيد |
| --- | --- |
| Tweets downloaded | 2295 |
| Retweets | 74 |
| Short tweets | 217 |
| Tweets kept | 2004 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39e5cxbb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zaidalyafeai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zaidalyafeai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
RalphX1/q-Taxi-v3
|
RalphX1
| 2022-06-09T12:44:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-09T12:21:02Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="RalphX1/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Dewone/wav2vec2-base-timit-demo-google-colab
|
Dewone
| 2022-06-09T12:37:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-09T10:36:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5182
- Wer: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
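A minimal transcription sketch, assuming the standard `transformers` CTC classes and `librosa` for audio loading (`sample.wav` is a placeholder; the model expects 16 kHz mono audio):

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Dewone/wav2vec2-base-timit-demo-google-colab")
model = Wav2Vec2ForCTC.from_pretrained("Dewone/wav2vec2-base-timit-demo-google-colab")

# Load and resample the recording to 16 kHz ("sample.wav" is a placeholder path)
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```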
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5177 | 1.0 | 500 | 1.8932 | 0.9837 |
| 0.854 | 2.01 | 1000 | 0.5295 | 0.5266 |
| 0.4205 | 3.01 | 1500 | 0.4299 | 0.4453 |
| 0.2934 | 4.02 | 2000 | 0.3940 | 0.4180 |
| 0.2272 | 5.02 | 2500 | 0.4269 | 0.4149 |
| 0.1856 | 6.02 | 3000 | 0.4277 | 0.4335 |
| 0.1668 | 7.03 | 3500 | 0.4214 | 0.3852 |
| 0.1388 | 8.03 | 4000 | 0.4410 | 0.3805 |
| 0.1254 | 9.04 | 4500 | 0.4152 | 0.3716 |
| 0.1073 | 10.04 | 5000 | 0.4257 | 0.3726 |
| 0.1 | 11.04 | 5500 | 0.4405 | 0.3642 |
| 0.0928 | 12.05 | 6000 | 0.4823 | 0.3708 |
| 0.0829 | 13.05 | 6500 | 0.4636 | 0.3548 |
| 0.0682 | 14.06 | 7000 | 0.4718 | 0.3599 |
| 0.0643 | 15.06 | 7500 | 0.4965 | 0.3583 |
| 0.0609 | 16.06 | 8000 | 0.5279 | 0.3576 |
| 0.0586 | 17.07 | 8500 | 0.4869 | 0.3528 |
| 0.055 | 18.07 | 9000 | 0.4671 | 0.3567 |
| 0.0465 | 19.08 | 9500 | 0.5090 | 0.3508 |
| 0.0432 | 20.08 | 10000 | 0.5024 | 0.3543 |
| 0.0427 | 21.08 | 10500 | 0.4658 | 0.3417 |
| 0.033 | 22.09 | 11000 | 0.5276 | 0.3418 |
| 0.0297 | 23.09 | 11500 | 0.5095 | 0.3415 |
| 0.0317 | 24.1 | 12000 | 0.5061 | 0.3364 |
| 0.0262 | 25.1 | 12500 | 0.4910 | 0.3367 |
| 0.0257 | 26.1 | 13000 | 0.4869 | 0.3331 |
| 0.0237 | 27.11 | 13500 | 0.5023 | 0.3333 |
| 0.0228 | 28.11 | 14000 | 0.5131 | 0.3333 |
| 0.021 | 29.12 | 14500 | 0.5182 | 0.3329 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
qualitydatalab/autotrain-car-review-project-966432120
|
qualitydatalab
| 2022-06-09T12:36:14Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:qualitydatalab/autotrain-data-car-review-project",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T12:30:01Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- qualitydatalab/autotrain-data-car-review-project
co2_eq_emissions: 0.061185706621337065
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 966432120
- CO2 Emissions (in grams): 0.061185706621337065
## Validation Metrics
- Loss: 0.6066656112670898
- Accuracy: 0.724822695035461
- Macro F1: 0.7077087000886584
- Micro F1: 0.7248226950354609
- Weighted F1: 0.7077087000886584
- Macro Precision: 0.7143184427227084
- Micro Precision: 0.724822695035461
- Weighted Precision: 0.7143184427227083
- Macro Recall: 0.7248226950354609
- Micro Recall: 0.724822695035461
- Weighted Recall: 0.724822695035461
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432120
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
assamim/mt5-pukulenam-summarization
|
assamim
| 2022-06-09T12:19:33Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"Summarization",
"mT5",
"dataset:csebuetnlp/xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-08T15:08:51Z |
---
tags:
- generated_from_keras_callback
- Summarization
- mT5
datasets:
- csebuetnlp/xlsum
model-index:
- name: assamim/mt5-pukulenam-summarization
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# assamim/mt5-pukulenam-summarization
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the [csebuetnlp/xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset.
## Using this model in `transformers` (tested on 4.19.2)
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import re
news = """
Anggota Unit Perlindungan Rakyat Kurdi di kota Rabia, pada perbatasan Irak-Suriah. Pasukan Kurdi Irak dilaporkan sudah menguasai kembali kota Rabia meskipun banyak korban jatuh. Pejabat senior Kurdi Irak mengatakan pasukan Kurdi Peshmerga mencatat kemajuan lewat serangan dini hari di Rabia. Sementara itu, milisi ISIS berusaha memukul mundur pasukan Kurdi Suriah di bagian lain perbatasan. Hal ini terjadi saat koalisi pimpinan Amerika terus melanjutkan serangan udara terhadap sasaran ISIS di Suriah dan Irak. Hari Selasa (30 September) dilaporkan juga terjadi serangkaian serangan bom di ibu kota Irak, Baghdad dan kota suci Syiah, Karbala. Dalam perkembangan terpisah, sejumlah tank Turki berada di bukit di sepanjang perbatasan dekat kota Kobane, Suriah setelah sejumlah bom mengenai wilayah Turki saat terjadi bentrokan dengan milisi ISIS dan pejuang Kurdi. Pemerintah Turki diperkirakan akan menyampaikan mosi ke parlemen, agar menyetujui aksi militer terhadap ISIS di Irak dan Suriah.
"""
tokenizer = AutoTokenizer.from_pretrained("assamim/mt5-pukulenam-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("assamim/mt5-pukulenam-summarization", from_tf=True)
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
input_ids = tokenizer.encode(WHITESPACE_HANDLER(news), return_tensors='pt')
summary_ids = model.generate(input_ids,
min_length=20,
max_length=200,
num_beams=7,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
|
nestoralvaro
| 2022-06-09T11:54:52Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-09T05:36:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.8441
- Rouge2: 0.0894
- Rougel: 0.8428
- Rougelsum: 0.844
- Gen Len: 6.338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 89332 | nan | 0.8441 | 0.0894 | 0.8428 | 0.844 | 6.338 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YaYaB/dqn-SpaceInvadersNoFrameskip-v4
|
YaYaB
| 2022-06-09T11:24:49Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-09T11:24:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YaYaB -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga YaYaB
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
FritzOS/TEdetection_distilBERT_mLM_V4
|
FritzOS
| 2022-06-09T11:12:10Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-09T11:11:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distilBERT_mLM_V4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distilBERT_mLM_V4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0181
- Validation Loss: 0.0215
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0181 | 0.0215 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mbazaNLP/kinyarwanda-coqui-stt-model
|
mbazaNLP
| 2022-06-09T11:09:26Z | 0 | 0 | null |
[
"tflite",
"Coqui",
"Deepspeech",
"LSTM",
"automatic-speech-recognition",
"rw",
"dataset:commonvoice",
"arxiv:1412.5567",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2022-05-27T08:23:47Z |
---
language: "rw"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- Coqui
- Deepspeech
- LSTM
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
---
**Model card - Kinyarwanda Coqui STT model**
**Model details**
- Kinyarwanda speech-to-text model
- Developed by [Digital Umuganda](https://digitalumuganda.com)
- Model based on: Baidu DeepSpeech end-to-end RNN model
- Paper: [DeepSpeech end-to-end STT](https://arxiv.org/pdf/1412.5567.pdf)
- Model documentation: [DeepSpeech documentation](https://deepspeech.readthedocs.io/)
- License: Mozilla 2.0 License
- Feedback on the model: samuel@digitalumuganda.com
**Intended use cases**
- Intended to be used for:
  - simple keyword spotting
  - simple transcription
  - transfer learning for better Kinyarwanda and African language models
- Intended to be used by:
  - app developers
  - organizations that want to transcribe Kinyarwanda recordings
  - ML researchers
  - other researchers working on Kinyarwanda and on technology use in Kinyarwanda (e.g. linguists, journalists)
- Not intended to be used as:
  - a fully fledged voice assistant
  - a voice recognition application
  - multilingual STT
  - language detection
**Factors**
- Anti-bias: biases that can influence the accuracy of the model
  - gender
  - accents and dialects
  - age
- Voice quality: factors that can influence the accuracy of the model
  - background noise
  - short sentences
- Voice format: recordings must be converted to the WAV format
**Metrics**
- word error rate on the Common Voice Kinyarwanda test set
|Test Corpus|WER|
|-----------|---|
|Common Voice|39.1%|
**Training data**
- [common voice crowdsource website](https://commonvoice.mozilla.org/en/datasets)
**Evaluation data**
- [common voice crowdsource website](https://commonvoice.mozilla.org/en/datasets)
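**Usage sketch**
A minimal transcription example with the Coqui STT Python bindings; the file names are placeholders for the exported acoustic model (and optional scorer) in this repository, and the audio must be 16-bit mono PCM at the model's sample rate:
```python
# pip install stt
import wave

import numpy as np
from stt import Model

model = Model("kinyarwanda.tflite")                 # placeholder path to the acoustic model
# model.enableExternalScorer("kinyarwanda.scorer")  # optional language-model scorer, if provided

with wave.open("audio.wav", "rb") as wav:           # 16-bit mono PCM recording
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```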
|
i8pxgd2s/q-FrozenLake-v1-4x4-Slippery
|
i8pxgd2s
| 2022-06-09T10:29:25Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-09T10:29:18Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.75 +/- 0.43
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="i8pxgd2s/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
twieland/SUBTITLE_ja-en_helsinki
|
twieland
| 2022-06-09T10:23:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-09T07:21:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SUBTITLE_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SUBTITLE_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.025 | 0.05 | 2000 | 5.1692 |
| 2.9548 | 0.09 | 4000 | 5.7128 |
| 2.8762 | 0.14 | 6000 | 5.9297 |
| 2.821 | 0.18 | 8000 | 6.0415 |
| 2.7826 | 0.23 | 10000 | 6.0416 |
| 2.7386 | 0.27 | 12000 | 6.0069 |
| 2.7036 | 0.32 | 14000 | 6.0192 |
| 2.678 | 0.37 | 16000 | 5.9286 |
| 2.6499 | 0.41 | 18000 | 5.9587 |
| 2.6261 | 0.46 | 20000 | 5.9044 |
| 2.6032 | 0.5 | 22000 | 5.8482 |
| 2.5708 | 0.55 | 24000 | 5.7760 |
| 2.5517 | 0.59 | 26000 | 5.7546 |
| 2.5336 | 0.64 | 28000 | 5.7447 |
| 2.5196 | 0.69 | 30000 | 5.7373 |
| 2.4957 | 0.73 | 32000 | 5.6429 |
| 2.483 | 0.78 | 34000 | 5.6874 |
| 2.4599 | 0.82 | 36000 | 5.6482 |
| 2.4468 | 0.87 | 38000 | 5.5951 |
| 2.4344 | 0.92 | 40000 | 5.6355 |
| 2.4223 | 0.96 | 42000 | 5.6135 |
| 2.3878 | 1.01 | 44000 | 5.6164 |
| 2.294 | 1.05 | 46000 | 5.5802 |
| 2.2896 | 1.1 | 48000 | 5.5924 |
| 2.2815 | 1.14 | 50000 | 5.5296 |
| 2.2702 | 1.19 | 52000 | 5.5119 |
| 2.2741 | 1.24 | 54000 | 5.4775 |
| 2.2586 | 1.28 | 56000 | 5.4663 |
| 2.2492 | 1.33 | 58000 | 5.4764 |
| 2.2411 | 1.37 | 60000 | 5.4444 |
| 2.2275 | 1.42 | 62000 | 5.4566 |
| 2.218 | 1.46 | 64000 | 5.4845 |
| 2.2086 | 1.51 | 66000 | 5.4681 |
| 2.1976 | 1.56 | 68000 | 5.4775 |
| 2.1877 | 1.6 | 70000 | 5.4619 |
| 2.177 | 1.65 | 72000 | 5.4621 |
| 2.1722 | 1.69 | 74000 | 5.4322 |
| 2.1599 | 1.74 | 76000 | 5.4348 |
| 2.1475 | 1.78 | 78000 | 5.4432 |
| 2.1477 | 1.83 | 80000 | 5.4239 |
| 2.134 | 1.88 | 82000 | 5.4182 |
| 2.1302 | 1.92 | 84000 | 5.4089 |
| 2.125 | 1.97 | 86000 | 5.4097 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|