Dataset schema (one record per model; the `card` column holds the model card markdown shown after each row):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-11 12:33:28 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-11 12:33:10 |
| card | string | length 11 to 1.01M |

Each record below is one row, formatted as modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |, followed by the card text.
mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis | mmillet | 2022-06-06T20:28:00Z | 3 | 0 | transformers | [transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-06T18:22:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5820
- Accuracy: 0.7881
- F1: 0.7886
- Precision: 0.7906
- Recall: 0.7881
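The usage sections below are placeholders, so here is a minimal inference sketch, assuming the checkpoint loads with the standard Transformers text-classification pipeline (the example sentence and label set are not taken from the card):
```python
from transformers import pipeline

# Hedged usage sketch: load the fine-tuned checkpoint for Russian emotion classification
classifier = pipeline(
    "text-classification",
    model="mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis",
)

# Example Russian input ("I really liked this movie!"); the emotion label names come from the model config
print(classifier("Мне очень понравился этот фильм!"))
```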
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
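As a reading aid only, the hyperparameters above correspond roughly to the following `TrainingArguments`; this is a sketch, not the author's training script, and `output_dir` and `evaluation_strategy` are assumptions:
```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the list above;
# the Adam betas/epsilon shown in the card are the library defaults.
training_args = TrainingArguments(
    output_dir="rubert-tiny2_finetuned_emotion",  # assumed, not stated on the card
    learning_rate=1e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation table below
)
```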
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0996 | 1.0 | 69 | 1.0013 | 0.6879 | 0.6779 | 0.7070 | 0.6879 |
| 0.9524 | 2.0 | 138 | 0.8651 | 0.7265 | 0.7245 | 0.7322 | 0.7265 |
| 0.8345 | 3.0 | 207 | 0.7821 | 0.7422 | 0.7413 | 0.7445 | 0.7422 |
| 0.7573 | 4.0 | 276 | 0.7222 | 0.7484 | 0.7473 | 0.7482 | 0.7484 |
| 0.6923 | 5.0 | 345 | 0.6828 | 0.7568 | 0.7562 | 0.7562 | 0.7568 |
| 0.6412 | 6.0 | 414 | 0.6531 | 0.7568 | 0.7559 | 0.7556 | 0.7568 |
| 0.5982 | 7.0 | 483 | 0.6320 | 0.7610 | 0.7601 | 0.7597 | 0.7610 |
| 0.5593 | 8.0 | 552 | 0.6133 | 0.7651 | 0.7655 | 0.7664 | 0.7651 |
| 0.5183 | 9.0 | 621 | 0.6036 | 0.7714 | 0.7708 | 0.7709 | 0.7714 |
| 0.5042 | 10.0 | 690 | 0.5951 | 0.7756 | 0.7755 | 0.7760 | 0.7756 |
| 0.483 | 11.0 | 759 | 0.5878 | 0.7766 | 0.7768 | 0.7774 | 0.7766 |
| 0.4531 | 12.0 | 828 | 0.5855 | 0.7850 | 0.7841 | 0.7839 | 0.7850 |
| 0.4386 | 13.0 | 897 | 0.5828 | 0.7797 | 0.7790 | 0.7786 | 0.7797 |
| 0.4238 | 14.0 | 966 | 0.5788 | 0.7777 | 0.7780 | 0.7786 | 0.7777 |
| 0.4018 | 15.0 | 1035 | 0.5793 | 0.7839 | 0.7842 | 0.7855 | 0.7839 |
| 0.3998 | 16.0 | 1104 | 0.5801 | 0.7850 | 0.7844 | 0.7841 | 0.7850 |
| 0.3747 | 17.0 | 1173 | 0.5791 | 0.7839 | 0.7836 | 0.7833 | 0.7839 |
| 0.3595 | 18.0 | 1242 | 0.5799 | 0.7891 | 0.7891 | 0.7894 | 0.7891 |
| 0.3575 | 19.0 | 1311 | 0.5820 | 0.7881 | 0.7886 | 0.7906 | 0.7881 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
ett1112/amazon_sentiment_sample_of_1900_with_summary | ett1112 | 2022-06-06T19:06:55Z | 5 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-06T18:56:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: amazon_sentiment_sample_of_1900_with_summary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_sentiment_sample_of_1900_with_summary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1062
- Accuracy: 0.9581
- F1: 0.9579
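No usage snippet is included; a hedged sketch of direct inference with the tokenizer and model classes (the example review is illustrative, and the card does not document the label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ett1112/amazon_sentiment_sample_of_1900_with_summary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a hypothetical Amazon-style review; id2label in the model config gives the class names
inputs = tokenizer("Great product, arrived quickly and works as described.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```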
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
huggingtweets/russellriesjr | huggingtweets | 2022-06-06T18:54:21Z | 3 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-06T18:47:54Z |
---
language: en
thumbnail: http://www.huggingtweets.com/russellriesjr/1654541578565/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468670117357789192/sStrLB1i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Russell Ries Jr.</div>
<div style="text-align: center; font-size: 14px;">@russellriesjr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Russell Ries Jr..
| Data | Russell Ries Jr. |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 2163 |
| Short tweets | 135 |
| Tweets kept | 938 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hn2gsci/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @russellriesjr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/378xjgs6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/378xjgs6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/russellriesjr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
ett1112/amazon_sentiment_sample_of_1900 | ett1112 | 2022-06-06T18:29:16Z | 5 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-06T18:19:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: amazon_sentiment_sample_of_1900
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_sentiment_sample_of_1900
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2185
- Accuracy: 0.9162
- F1: 0.9192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
huggingtweets/nonewthing | huggingtweets | 2022-06-06T17:50:00Z | 4 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-06T17:49:54Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1532336212412977152/TWPqTO8d_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AI</div>
<div style="text-align: center; font-size: 14px;">@nonewthing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AI.
| Data | AI |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 100 |
| Short tweets | 234 |
| Tweets kept | 2913 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bf84hrrd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nonewthing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/169zdg1z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/169zdg1z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nonewthing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
yanekyuk/berturk-uncased-keyword-extractor | yanekyuk | 2022-06-06T17:09:28Z | 7 | 0 | transformers | [transformers, pytorch, bert, token-classification, generated_from_trainer, tr, license:mit, autotrain_compatible, endpoints_compatible, region:us] | token-classification | 2022-06-06T09:33:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- tr
widget:
- text: "İngiltere'de düzenlenen Avrupa Tekvando ve Para Tekvando Şampiyonası’nda millî tekvandocular 5 altın, 2 gümüş ve 4 bronz olmak üzere 11, millî para tekvandocular ise 4 altın, 3 gümüş ve 1 bronz olmak üzere 8 madalya kazanarak takım halinde Avrupa şampiyonu oldu."
- text: "Füme somon dedik ama aslında lox salamuralanmış somon anlamına geliyor, füme etme opsiyonel. Lox bagel, 1930'larda Eggs Benedict furyasında New Yorklu Yahudi cemaati tarafından koşer bir alternatif olarak çıkan bir lezzet. Günümüzde benim hangover yüreğim dâhil dünyanın birçok yerinde enfes bir kahvaltı sandviçi."
- text: "Türkiye'de son aylarda sıklıkla tartışılan konut satışı karşılığında yabancılara vatandaşlık verilmesi konusunu beyin göçü kapsamında ele almak mümkün. Daha önce 250 bin dolar olan vatandaşlık bedeli yükselen tepkiler üzerine 400 bin dolara çıkarılmıştı. Türkiye'den göç eden iyi eğitimli kişilerin , gittikleri ülkelerde 250 bin dolar tutarında yabancı yatırıma denk olduğu göz önüne alındığında nitelikli insan gücünün yabancılara konut karşılığında satılan vatandaşlık bedelin eş olduğunu görüyoruz. Yurt dışına giden her bir vatandaşın yüksek teknolojili katma değer üreten sektörlere yapacağı katkılar göz önünde bulundurulduğunda bu açığın inşaat sektörüyle kapatıldığını da görüyoruz. Beyin göçü konusunda sadece ekonomik perspektiften bakıldığında bile kısa vadeli döviz kaynağı yaratmak için kullanılan vatandaşlık satışı yerine beyin göçünü önleyecek önlemler alınmasının ülkemize çok daha faydalı olacağı sonucunu çıkarıyoruz."
- text: "Türkiye’de resmî verilere göre, 15 ve daha yukarı yaştaki kişilerde mevsim etkisinden arındırılmış işsiz sayısı, bu yılın ilk çeyreğinde bir önceki çeyreğe göre 50 bin kişi artarak 3 milyon 845 bin kişi oldu. Mevsim etkisinden arındırılmış işsizlik oranı ise 0,1 puanlık artışla %11,4 seviyesinde gerçekleşti. İşsizlik oranı, ilk çeyrekte geçen yılın aynı çeyreğine göre 1,7 puan azaldı."
- text: "Boeing’in insansız uzay aracı Starliner, birtakım sorunlara rağmen Uluslararası Uzay İstasyonuna (ISS) ulaşarak ilk kez başarılı bir şekilde kenetlendi. Aracın ISS’te beş gün kalmasını takiben sorunsuz bir şekilde New Mexico’ya inmesi halinde Boeing, sonbaharda astronotları yörüngeye göndermek için Starliner’ı kullanabilir.\n\nNeden önemli? NASA’nın personal aracı üretmeyi durdurmasından kaynaklı olarak görevli astronotlar ve kozmonotlar, ISS’te Rusya’nın ürettiği uzay araçları ile taşınıyordu. Starliner’ın kendini kanıtlaması ise bu konuda Rusya’ya olan bağımlılığın potansiyel olarak ortadan kalkabileceği anlamına geliyor."
model-index:
- name: berturk-uncased-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berturk-uncased-keyword-extractor
This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3931
- Precision: 0.6631
- Recall: 0.6728
- Accuracy: 0.9188
- F1: 0.6679
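A hedged usage sketch for keyword extraction via the token-classification pipeline; `aggregation_strategy="simple"` merges wordpieces into spans, and the example sentence is adapted from the widget texts above:
```python
from transformers import pipeline

# Sketch: run the fine-tuned BERTurk model as a token-classification (keyword extraction) pipeline
extractor = pipeline(
    "token-classification",
    model="yanekyuk/berturk-uncased-keyword-extractor",
    aggregation_strategy="simple",
)
print(extractor("Mevsim etkisinden arındırılmış işsizlik oranı %11,4 seviyesinde gerçekleşti."))
```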
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1779 | 1.0 | 1875 | 0.1862 | 0.6199 | 0.6356 | 0.9192 | 0.6276 |
| 0.1327 | 2.0 | 3750 | 0.1890 | 0.6328 | 0.6917 | 0.9206 | 0.6610 |
| 0.1008 | 3.0 | 5625 | 0.2188 | 0.6322 | 0.7037 | 0.9189 | 0.6660 |
| 0.0755 | 4.0 | 7500 | 0.2539 | 0.6395 | 0.7030 | 0.9181 | 0.6697 |
| 0.0574 | 5.0 | 9375 | 0.2882 | 0.6556 | 0.6868 | 0.9197 | 0.6709 |
| 0.0433 | 6.0 | 11250 | 0.3425 | 0.6565 | 0.6851 | 0.9189 | 0.6705 |
| 0.0352 | 7.0 | 13125 | 0.3703 | 0.6616 | 0.6776 | 0.9191 | 0.6695 |
| 0.0288 | 8.0 | 15000 | 0.3931 | 0.6631 | 0.6728 | 0.9188 | 0.6679 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
twieland/VN_ja_to_en | twieland | 2022-06-06T17:04:40Z | 5 | 1 | transformers | [transformers, pytorch, marian, text2text-generation, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-06T12:09:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: VN_ja_to_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VN_ja_to_en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0411
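The card gives no usage example; a minimal sketch, assuming the checkpoint loads with the standard Marian classes (the Japanese example sentence is illustrative only):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "twieland/VN_ja_to_en"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Translate a short Japanese sentence ("This is a test.") into English
batch = tokenizer(["これはテストです。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```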
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.0489 | 1.0 | 10276 | 2.0716 |
| 1.9028 | 2.0 | 20552 | 2.0646 |
| 1.812 | 3.0 | 30828 | 2.0525 |
| 1.7531 | 4.0 | 41104 | 2.0487 |
| 1.7083 | 5.0 | 51380 | 2.0375 |
| 1.6717 | 6.0 | 61656 | 2.0415 |
| 1.6354 | 7.0 | 71932 | 2.0398 |
| 1.6146 | 8.0 | 82208 | 2.0390 |
| 1.5972 | 9.0 | 92484 | 2.0391 |
| 1.582 | 10.0 | 102760 | 2.0411 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
vjeansel/q-Taxi-v3 | vjeansel | 2022-06-06T17:02:53Z | 0 | 0 | null | [Taxi-v3, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2022-06-06T17:02:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="vjeansel/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
dipesh/Intent-Classification-Bert-Base-Cased | dipesh | 2022-06-06T16:43:41Z | 4 | 1 | transformers | [transformers, tf, bert, text-classification, generated_from_keras_callback, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-05-27T13:27:06Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: Intent-Classification-Bert-Base-Cased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Intent-Classification-Bert-Base-Cased
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.9.1
- Datasets 2.2.2
- Tokenizers 0.10.3
garutyunov/meme-bert | garutyunov | 2022-06-06T16:26:56Z | 10 | 1 | pytorch | [pytorch, distilbert, meme classification, en, license:mit, region:us] | null | 2022-06-06T15:58:40Z |
---
language:
- en
license: mit
library_name: pytorch
tags:
- meme classification
metrics:
- accuracy
---
# MemeBERT
BERT model fine-tuned with the [Memes dataset](https://github.com/mrsndmn/memes-dataset).
spy24/autotrain-expand-928531583 | spy24 | 2022-06-06T16:04:02Z | 6 | 1 | transformers | [transformers, pytorch, pegasus, text2text-generation, autotrain, en, dataset:spy24/autotrain-data-expand, co2_eq_emissions, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-06T10:07:53Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- spy24/autotrain-data-expand
co2_eq_emissions: 3.4552892403407167
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 928531583
- CO2 Emissions (in grams): 3.4552892403407167
## Validation Metrics
- Loss: 2.1122372150421143
- Rouge1: 68.7226
- Rouge2: 50.1638
- RougeL: 59.7235
- RougeLsum: 62.3458
- Gen Len: 63.2505
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/spy24/autotrain-expand-928531583
```
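Alternatively, a hedged sketch of local inference with the Transformers library, assuming the Pegasus checkpoint loads as a standard summarization pipeline:
```python
from transformers import pipeline

# Sketch: run the AutoTrain Pegasus checkpoint locally instead of via the Inference API
expander = pipeline("summarization", model="spy24/autotrain-expand-928531583")
print(expander("I love AutoTrain"))
```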
ubiqtuitin/deeprltutorial1 | ubiqtuitin | 2022-06-06T14:30:31Z | 1 | 0 | stable-baselines3 | [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-06-06T14:30:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -205.33 +/- 79.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed, not confirmed
checkpoint = load_from_hub(repo_id="ubiqtuitin/deeprltutorial1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
q2-jlbar/swin-tiny-patch4-window7-224-finetuned-eurosat | q2-jlbar | 2022-06-06T14:24:15Z | 80 | 0 | transformers | [transformers, pytorch, swin, image-classification, generated_from_trainer, dataset:image_folder, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | image-classification | 2022-06-01T21:36:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9618518518518518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1199
- Accuracy: 0.9619
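No usage example is provided; a hedged sketch using the image-classification pipeline (the image path is a placeholder for a local EuroSAT-style RGB image):
```python
from transformers import pipeline

# Sketch: classify a satellite image with the fine-tuned Swin checkpoint
classifier = pipeline(
    "image-classification",
    model="q2-jlbar/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("path/to/satellite_image.png"))  # placeholder path
```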
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3627 | 0.99 | 47 | 0.1988 | 0.9389 |
| 0.2202 | 1.99 | 94 | 0.1280 | 0.9604 |
| 0.1948 | 2.99 | 141 | 0.1199 | 0.9619 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
limsc/reqroberta-tapt-epoch50 | limsc | 2022-06-06T14:19:05Z | 5 | 0 | transformers | [transformers, tf, roberta, fill-mask, generated_from_keras_callback, license:mit, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2022-06-05T23:40:45Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: reqroberta-tapt-epoch50
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# reqroberta-tapt-epoch50
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
huggingtweets/briangrimmett | huggingtweets | 2022-06-06T14:15:11Z | 4 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-06T13:51:18Z |
---
language: en
thumbnail: http://www.huggingtweets.com/briangrimmett/1654524569583/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1335009788212748291/X5EyBri8_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brian Grimmett</div>
<div style="text-align: center; font-size: 14px;">@briangrimmett</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brian Grimmett.
| Data | Brian Grimmett |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 1502 |
| Short tweets | 129 |
| Tweets kept | 1617 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nan0dmd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @briangrimmett's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mpmndjc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mpmndjc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/briangrimmett')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
miyagawaorj/distilbert-base-uncased-finetuned-clinc | miyagawaorj | 2022-06-06T14:04:54Z | 4 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:clinc_oos, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-05-20T16:51:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9474193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2454
- Accuracy: 0.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.496 | 1.0 | 954 | 1.8019 | 0.8306 |
| 1.0663 | 2.0 | 1908 | 0.5690 | 0.9174 |
| 0.3267 | 3.0 | 2862 | 0.3128 | 0.9406 |
| 0.1397 | 4.0 | 3816 | 0.2567 | 0.9445 |
| 0.0846 | 5.0 | 4770 | 0.2454 | 0.9474 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.12.1
0xrushi/q-Taxi-v3 | 0xrushi | 2022-06-06T13:49:28Z | 0 | 0 | null | [Taxi-v3, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2022-06-06T13:48:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="rushic24/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
huggingtweets/bigmanbakar | huggingtweets | 2022-06-06T13:49:15Z | 3 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-06T13:48:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bigmanbakar/1654523350313/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1459686915498819587/cYF4VOWO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AbuBakar Siddiq</div>
<div style="text-align: center; font-size: 14px;">@bigmanbakar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AbuBakar Siddiq.
| Data | AbuBakar Siddiq |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 452 |
| Short tweets | 769 |
| Tweets kept | 2023 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ggb85vg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bigmanbakar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qafbtox) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qafbtox/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bigmanbakar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
eunjin/kobart_gyeongsang_translator | eunjin | 2022-06-06T13:45:06Z | 5 | 1 | transformers | [transformers, pytorch, bart, text2text-generation, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-06T05:13:42Z |
Korean Dialect Translator: Standard > Gyeongsang
- Used data: AI Hub Korean dialect speech dataset (Gyeongsang-do)
- Used model: SKT KoBART
  - https://github.com/SKT-AI/KoBART
- Reference code
  - https://github.com/seujung/KoBART-translation
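No usage snippet is provided; a loosely hedged sketch, assuming the repository ships tokenizer files that load with `PreTrainedTokenizerFast`, as KoBART checkpoints usually do (check the repo files before relying on this):
```python
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

model_id = "eunjin/kobart_gyeongsang_translator"
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_id)  # assumption: tokenizer files are included
model = BartForConditionalGeneration.from_pretrained(model_id)

# Translate a standard-Korean sentence ("The weather is really nice today.") into Gyeongsang dialect
inputs = tokenizer("오늘 날씨가 정말 좋네요.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```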
eunjin/kobart_jeju_translator | eunjin | 2022-06-06T13:44:49Z | 4 | 0 | transformers | [transformers, pytorch, bart, text2text-generation, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-06T05:23:56Z |
Korean Dialect Translator: Standard > Jeju
- Used data: AI Hub Korean dialect speech dataset (Jeju-do)
- Used model: SKT KoBART
  - https://github.com/SKT-AI/KoBART
- Reference code
  - https://github.com/seujung/KoBART-translation
eunjin/kobart_jeju_to_standard_translator | eunjin | 2022-06-06T13:44:31Z | 9 | 0 | transformers | [transformers, pytorch, bart, text2text-generation, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-06T05:27:20Z |
Korean Dialect Translator: Jeju > Standard
- Used data: AI Hub Korean dialect speech dataset (Jeju-do)
- Used model: SKT KoBART
  - https://github.com/SKT-AI/KoBART
- Reference code
  - https://github.com/seujung/KoBART-translation
Nitika/distilbert-base-uncased-finetuned-cola | Nitika | 2022-06-06T13:26:34Z | 4 | 0 | transformers | [transformers, tf, tensorboard, distilbert, text-classification, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-06T13:23:16Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Nitika/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Nitika/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1924
- Validation Loss: 0.4890
- Train Matthews Correlation: 0.5406
- Epoch: 2
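No usage snippet is included; a hedged sketch of TensorFlow inference (the example sentence is illustrative, and the label mapping is whatever the model config defines):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Nitika/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Score a sentence for acceptability (CoLA-style), printing class probabilities
inputs = tokenizer("The book was read by the student.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs)
```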
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5210 | 0.4510 | 0.4918 | 0 |
| 0.3327 | 0.4885 | 0.5156 | 1 |
| 0.1924 | 0.4890 | 0.5406 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
SimulSt/distilbert-base-uncased-finetuned-emotion | SimulSt | 2022-06-06T13:24:23Z | 6 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:emotion, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-02T20:04:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9250238763128368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8285 | 1.0 | 250 | 0.3203 | 0.905 | 0.9008 |
| 0.2544 | 2.0 | 500 | 0.2202 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
kabelomalapane/Af-En | kabelomalapane | 2022-06-06T13:14:27Z | 3 | 0 | transformers | [transformers, pytorch, marian, text2text-generation, translation, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-06-06T07:54:17Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Af
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-af-en](https://huggingface.co/Helsinki-NLP/opus-mt-en-af) on the None dataset.
It achieves the following results on the evaluation set:
Before training:
- Bleu: 46.1522519
- Loss: 2.5693612

After training:
- Loss: 1.7516168
- Bleu: 55.3924697
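No usage example is given, and the card's naming is ambiguous about the translation direction (the repo is Af-En while the heading says En-Af), so the input language below is an assumption; a hedged sketch with the generic translation pipeline for this Marian checkpoint:
```python
from transformers import pipeline

# Sketch: load the Marian-based checkpoint with the generic translation pipeline
translator = pipeline("translation", model="kabelomalapane/Af-En")

# Assumed Afrikaans input ("I like machine learning."); swap for English if the direction is En->Af
print(translator("Ek hou van masjienleer."))
```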
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
jppaolim/v56_Large_2E | jppaolim | 2022-06-06T12:17:56Z | 5 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-06T11:30:00Z |
# My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in love with his girlfriend. They go to the beach together. Arthur falls off the beach. Arthur needs medical attention. Arthur gets help at the beach.
Arthur goes to the beach. Arthur is feeling bored. He looks on the internet for something exciting. Arthur looks in the paper for something interesting. He sees that there is going to be a beach nearby. Arthur heads to the beach and meets some people there.
Arthur goes to the beach. Arthur always had a lot of fun at the beach. However, one day he decided to go swimming. Arthur had been there for hours and it was getting dark. Finally, he decided to go back home. Arthur went home and was happy.
Arthur goes to the beach. Arthur has never been to the beach. His friends tell him that it is very hot. He finally gets the courage to go. He spends his first day at the beach. Arthur cannot wait to come back.
Arthur goes to the beach. Arthur is so bored one day. He decides to go to the beach. He sees a bunch of people playing. He decides to join in. Arthur plays in the ocean with his friends.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur was excited about having his yearly family trip. He had booked a hotel for two in a beautiful beach. The day before his trip he went to the beach to go swimming. He loved the sand and the sun very much. Arthur spent the rest of his trip relaxing and relaxing.
Arthur goes to the beach. Arthur is a lonely man. He hasn't been out in Years. He decides to head out to the ocean for a walk. He walks all day long and has a wonderful time. After he gets home he is glad he went to the beach.
Arthur goes to the beach. Arthur was swimming at the beach. He swam into the deep water. A large wave hit Arthur. It carried him into the ocean. Arthur couldn't get back out of the water.
Arthur goes to the beach. Arthur loves the beach. He decided to go to the beach one day. At the beach he jumped in the ocean. As he jumped he hit his head. Arthur is glad he jumped in the ocean.
Arthur goes to the beach. Arthur was at the beach. He decided to jump into the water. Arthur wasn't wearing his sunscreen. Arthur got very burned on the beach. Arthur had to go home and change his clothes.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur went to the beach with his friend Rob. They played in the sand for an hour. Rob told Arthur that it was hot out. Arthur and Rob ran back home to put on sunscreen. Arthur went back to the beach without playing in the sand.
Arthur goes to the beach. Arthur is on vacation. He decides he would like to go to the beach. He goes to the beach. His friend takes him to eat at a seafood restaurant. They both have fun at the beach.
Arthur goes to the beach. Arthur had never been to the ocean before. His friends took him to the beach one day. He played in the water for an hour. Arthur then went home and rested. Arthur felt very happy and refreshed after that.
Arthur goes to the beach. Arthur went to the beach on vacation. He was bored and wanted some fun activity. He looked around for something fun. Arthur saw a friend of his at the beach. Arthur and his friend had fun playing together.
Arthur goes to the beach. Arthur is a lonely man. He decides he needs some company. Arthur gets on his boat and heads to the ocean. While at the beach, Arthur falls in love with a beautiful woman. Now Arthur has company.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is a very active person. He loves going to the beach. One day he finds a spot on the sand where he can play in peace. The water is so calm and so peaceful that Arthur cannot help but swim. Now Arthur is a world renown ocean swimmer.
Arthur goes to the beach. Arthur is on a vacation with his family. He decides to go to the beach. He gets all his friends together and they play in the sand. Afterwards, Arthur has fun at the beach. Now he is ready for his next adventure.
Arthur goes to the beach. Arthur is a little boy who loves going to the beach. He spends all his time playing in the sand and sun. One day he notices that it has started raining very hard. Arthur rushes home to take cover. Arthur gets soaked by the rain so he can go play again.
Arthur goes to the beach. Arthur is bored on a sunny day. He decides to go to the beach. Arthur gets his towel and sandals ready. He drives to the beach. Arthur spends the rest of the day at the beach.
Arthur goes to the beach. Arthur is feeling lonely one day. He decides to go on a trip to the beach. At the beach he has a blast. However, he sees an injured turtle. He rescues the turtle and returns it to its home.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is a very lonely boy. He wants to meet new people but he doesn't know where to go. One day his friend tells him about going to the beach. The next day Arthur gets ready and leaves for the beach. At the beach, Arthur meets lots of nice people and makes many friends!
Arthur goes to the beach. Arthur is going on vacation with his family. He has never been to the beach before. His parents tell him he needs a towel first. So Arthur gets a towel and puts it in the sand box. The next morning, Arthur takes a dip in the ocean.
Arthur goes to the beach. Arthur is bored on a weekend afternoon. He decides he would like to go to the beach. He gets his towel and sunscreen. Then he drives to the beach. Finally, Arthur has fun at the beach.
Arthur goes to the beach. Arthur is on vacation in Florida with his family. His family decides that they want to go to the beach. They all pack their towels and sunscreen. When Arthur gets there, he sees a lot of people at the beach. He spends most of his time playing in the sand instead of swimming.
Arthur goes to the beach. Arthur is bored at home. He decides to go out to the ocean. Arthur gets in his car and drives to the beach. At the beach he plays in the sand. Arthur has a great time on the beach.
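The parameter dictionaries above are standard sampling settings for GPT-2 generation; a hedged sketch of how one of them could be applied to this checkpoint:
```python
from transformers import pipeline

# Sketch: reproduce one of the sampling configurations listed above
generator = pipeline("text-generation", model="jppaolim/v56_Large_2E")
story = generator(
    "Arthur goes to the beach.",
    do_sample=True,
    top_p=0.9,
    top_k=40,
    temperature=0.8,
    repetition_penalty=1.1,
    max_new_tokens=80,
)[0]["generated_text"]
print(story)
```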
zakria/repo_name | zakria | 2022-06-06T11:23:23Z | 7 | 0 | transformers | [transformers, pytorch, wav2vec2, automatic-speech-recognition, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us] | automatic-speech-recognition | 2022-06-06T11:09:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: repo_name
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# repo_name
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
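The card reports no results or usage; a hedged sketch, assuming the repository includes the processor/tokenizer files needed by the standard automatic-speech-recognition pipeline:
```python
from transformers import pipeline

# Sketch: transcribe a local 16kHz audio file with the fine-tuned wav2vec2 checkpoint
asr = pipeline("automatic-speech-recognition", model="zakria/repo_name")
print(asr("path/to/audio.wav"))  # placeholder path
```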
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
huggingartists/rammstein | huggingartists | 2022-06-06T11:14:46Z | 5 | 0 | transformers | [transformers, pytorch, jax, gpt2, text-generation, huggingartists, lyrics, lm-head, causal-lm, en, dataset:huggingartists/rammstein, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/rammstein
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/29cedf8dd30a7458f4fca47d1c0f0eab.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rammstein</div>
<a href="https://genius.com/artists/rammstein">
<div style="text-align: center; font-size: 14px;">@rammstein</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Rammstein.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/rammstein).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/rammstein")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/qt3qa1x1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Rammstein's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2yyigjzv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2yyigjzv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/rammstein')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/rammstein")
model = AutoModelWithLMHead.from_pretrained("huggingartists/rammstein")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
facebook/data2vec-audio-large-960h | facebook | 2022-06-06T10:36:59Z | 805 | 7 | transformers | [transformers, pytorch, data2vec-audio, automatic-speech-recognition, speech, hf-asr-leaderboard, en, dataset:librispeech_asr, arxiv:2202.03555, license:apache-2.0, model-index, endpoints_compatible, region:us] | automatic-speech-recognition | 2022-04-02T16:01:11Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: data2vec-audio-large-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.89
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.07
---
# Data2Vec-Audio-Large-960h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
This is the large model, pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec.
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/data2vec-audio-large-960h** on LibriSpeech's "clean" and "other" test data.
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
from jiwer import wer
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-960h").to("cuda")
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.89 | 4.07 |
|
sayakpramanik/distilbert-base-uncased-finetuned-emotion
|
sayakpramanik
| 2022-06-06T10:12:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T09:52:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9228534433920637
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- Accuracy: 0.923
- F1: 0.9229
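A minimal inference sketch with the Transformers `pipeline` API (the example sentence is purely illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for emotion classification.
classifier = pipeline(
    "text-classification",
    model="sayakpramanik/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy today!"))
```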
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8472 | 1.0 | 250 | 0.3169 | 0.912 | 0.9105 |
| 0.2475 | 2.0 | 500 | 0.2166 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Nawaphong-zax/wangchanberta-base-att-spm-uncased-finetuned-cosme
|
Nawaphong-zax
| 2022-06-06T08:52:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-06T07:12:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-finetuned-cosme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-finetuned-cosme
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1386 | 1.0 | 391 | 1.9939 |
| 2.1301 | 2.0 | 782 | 1.9974 |
| 2.1296 | 3.0 | 1173 | 2.0013 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
RuiqianLi/wav2vec2-large-xls-r-300m-chinese-taiwan-colab
|
RuiqianLi
| 2022-06-06T08:12:49Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-17T15:38:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-chinese-taiwan-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chinese-taiwan-colab
**Note:** this model was trained with a very high learning rate for only a few epochs, so please do not use it for speech-to-text.
It is only a test; the model will be retrained more thoroughly later.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
anlausch/aq_bert_ibm
|
anlausch
| 2022-06-06T08:10:46Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T07:28:24Z |
---
license: mit
---
This model was trained on IBMArgRank30k for 2 epochs with a learning rate of 3e-5 (optimised via grid search), in a similar way to Lauscher et al. 2020 (see below). The original model was TensorFlow-based; this model is a reimplementation with Transformers & PyTorch.
```
@inproceedings{lauscher-etal-2020-rhetoric,
title = "Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing",
author = "Lauscher, Anne and
Ng, Lily and
Napoles, Courtney and
Tetreault, Joel",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.402",
doi = "10.18653/v1/2020.coling-main.402",
pages = "4563--4574",
abstract = "Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q{\&}A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.",
}
```
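A minimal loading sketch with Transformers (a sketch only: the example argument is illustrative, and how the output head should be interpreted follows the original argument-quality setup):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("anlausch/aq_bert_ibm")
model = AutoModelForSequenceClassification.from_pretrained("anlausch/aq_bert_ibm")

# Score a single argument (illustrative input text).
inputs = tokenizer(
    "We should ban plastic bags because they pollute the oceans.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```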
|
anlausch/aq_bert_gaq_mt
|
anlausch
| 2022-06-06T08:09:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-06T07:41:55Z |
---
license: mit
---
Multi-task learning model (flat architecture) trained on GAQCorpus for 4 epochs with a learning rate of 2e-5 (optimised via grid search), in a similar way to Lauscher et al. 2020 (see below). The original model was TensorFlow-based; this model is a reimplementation with Transformers & PyTorch.
```
@inproceedings{lauscher-etal-2020-rhetoric,
title = "Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing",
author = "Lauscher, Anne and
Ng, Lily and
Napoles, Courtney and
Tetreault, Joel",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.402",
doi = "10.18653/v1/2020.coling-main.402",
pages = "4563--4574",
abstract = "Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q{\&}A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.",
}
```
|
olpa/xml-roberta-base-finetuned-panx-fr
|
olpa
| 2022-06-06T06:41:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-05T08:27:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xml-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8393729984830608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xml-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2691
- F1: 0.8394
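A minimal NER sketch with the Transformers `pipeline` API (the example sentence is purely illustrative):
```python
from transformers import pipeline

# Aggregate sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="olpa/xml-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron habite à Paris."))
```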
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 191 | 0.3150 | 0.7993 |
| No log | 2.0 | 382 | 0.2799 | 0.8213 |
| No log | 3.0 | 573 | 0.2691 | 0.8394 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ripsky/Poop
|
Ripsky
| 2022-06-06T06:39:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-06-06T06:32:10Z |
---
title: Poop
emoji: 🥑
colorFrom: yellow
colorTo: green
sdk: static
pinned: True
license: apache-2.0
---
|
Evelyn18/legalectra-base-spanish-finetuned-squad
|
Evelyn18
| 2022-06-06T06:22:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad_es",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-06T04:29:40Z |
---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: legalectra-base-spanish-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-base-spanish-finetuned-squad
This model is a fine-tuned version of [mrm8488/legalectra-base-spanish](https://huggingface.co/mrm8488/legalectra-base-spanish) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
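Given the flat training loss above, treat this only as a usage sketch; a minimal question-answering example with the Transformers `pipeline` API (the Spanish context and question are purely illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/legalectra-base-spanish-finetuned-squad",
)
result = qa(
    question="¿Dónde vive el demandado?",  # "Where does the defendant live?"
    context="El demandado vive en Madrid desde 2010.",
)
print(result)
```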
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.9506 |
| No log | 2.0 | 6 | 5.9506 |
| No log | 3.0 | 9 | 5.9506 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text
|
nestoralvaro
| 2022-06-06T03:26:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:mlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-05T17:24:08Z |
---
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
args: es
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 66592 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Chetan1997/layoutlmv2-finetuned-funsd-test
|
Chetan1997
| 2022-06-06T03:20:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-06T02:23:11Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mindwrapped/ppo-BipedalWalker-v3
|
mindwrapped
| 2022-06-06T03:16:20Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-10T23:46:10Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 185.82 +/- 92.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
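Until the author adds their own snippet, a minimal loading sketch with `huggingface_sb3` might look like this (the checkpoint filename is an assumption; check the repo's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; verify it against the files in this repository.
checkpoint = load_from_hub(
    repo_id="mindwrapped/ppo-BipedalWalker-v3",
    filename="ppo-BipedalWalker-v3.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy.
env = gym.make("BipedalWalker-v3")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```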
|
bondi/bert-semaphore-prediction-w8
|
bondi
| 2022-06-06T02:36:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T02:35:32Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w8
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bondi/bert-semaphore-prediction-w4
|
bondi
| 2022-06-06T02:35:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T02:34:23Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w4
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bondi/bert-semaphore-prediction-w2
|
bondi
| 2022-06-06T02:34:15Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T02:33:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bondi/bert-semaphore-prediction-w0
|
bondi
| 2022-06-06T02:33:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T02:32:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w0
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jppaolim/v55_Large_2E
|
jppaolim
| 2022-06-06T01:24:38Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-06T00:33:50Z |
# My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is bored and wanted to go the beach. His friends suggest he drive to the beach. Arthur gets a ride and they take off. Arthur takes a nap and has a good time. He has so much fun at the beach he doesn't want to leave.
Arthur goes to the beach. Arthur is feeling very hungry. He decides to go to the beach. Arthur gets some food. Arthur puts his food in his cooler. Arthur goes home and doesn't feel hungry any more.
Arthur goes to the beach. Arthur always wanted to go to the beach. He saved up money so he could take his dream trip. Finally he went to the beach and it was so beautiful. He loved his trip to the beach and decided he would go again. Arthur packed his bags and went to the beach.
Arthur goes to the beach. Arthur went to the beach last weekend. He swam on the sand and looked at the ocean. He saw several people walking around on the beach. Arthur stopped to talk to them. Arthur went home and told his mother about his trip.
Arthur goes to the beach. Arthur is so excited for the weekend. He knows he needs a new bathing suit. He finds the perfect one at the beach. He spends the day relaxing and exploring the shore. Arthur cannot wait for the next trip to the beach.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur is playing with his friends in the sand at the beach. His friend Tom comes by and invites him to join them. Arthur loves the beach. Arthur spends the afternoon playing in the sand. Arthur and Tom have a great day at the beach.
Arthur goes to the beach. Arthur was going to the beach. He packed his towel and his sunscreen. He drove his car to the beach. Arthur swam in the ocean. Arthur had fun at the beach.
Arthur goes to the beach. Arthur is bored one day and decides he wants to go to the beach. He packs up his surfboard, towel, and sunscreen. Arthur goes to the ocean and spends the day there. He goes home and tells his mom about his day. Arthur is happy that he took a trip to the beach.
Arthur goes to the beach. Arthur loved the beach. He got his towel and sandals. He went out into the ocean. Arthur was shocked by the cold ocean. He decided he needed to go back home.
Arthur goes to the beach. Arthur really wants to go to the beach. His friend tells him it is too hot out. Arthur convinces his friend to come with him. They drive to the beach. Arthur spends the day playing in the ocean.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is going to the beach. He has packed his beach towel and sunscreen. Once he gets to the beach he finds a spot to sit down. He relaxes for a while and then swims in the water. Arthur loves the beach!
Arthur goes to the beach. Arthur is very bored. He decides to head to the beach. At the beach he relaxes on the sand. Then he gets out of his car and checks out. Arthur has spent the day at the beach.
Arthur goes to the beach. Arthur had always wanted to visit the ocean. He has saved his money for many Years. Finally he saves up enough money. Arthur takes a trip to the beach. He spends the whole day in the ocean.
Arthur goes to the beach. Arthur was so excited that he had packed his swimming trunks. He was going to the beach and he couldn't wait to swim! When he got to the beach, he saw it was closed for cleaning. He asked his mom if she would take him to the beach anyway. She said yes, but Arthur could have a picnic instead.
Arthur goes to the beach. Arthur is going to the beach with his friends today. He needs a bathing suit but doesn't have one. He decides to go without a bathing suit. When he gets there, he sees that they have a long line. Arthur finally finds a nice one and swims in the water.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is going on vacation with his family. He asks if they want to go to the beach. They agree and he drives them there. When they get to the beach, Arthur falls in love with a beautiful girl. Arthur and his family spend the rest of their trip together.
Arthur goes to the beach. Arthur is very bored on a hot day. He decides he needs something to do. He heads down to the local beach. He spends all day playing in the sand and sun. Arthur is happy that he no longer feels bored.
Arthur goes to the beach. Arthur was bored one day. He decided to go to the beach. Arthur packed a towel and sunscreen. Then, he went out into the ocean. Arthur had fun at the beach.
Arthur goes to the beach. Arthur was bored at home one day. He decided he would go to the beach. Arthur packed up his car and drove to the beach. Arthur laid on the sand enjoying the sun. Afterwards, Arthur went back home.
Arthur goes to the beach. Arthur was bored one afternoon so he decided to go to the beach. He packed his cooler and drove to the beach. Arthur found a spot on the sand that looked nice. He laid out his towel and sunblock and went for a swim. Arthur had such a great time at the beach!
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur was bored one day and wanted something to do. He decided to go to the beach. At the beach he played in the sand. Then he went swimming in the ocean. Finally, he came back home exhausted but happy.
Arthur goes to the beach. Arthur is bored one day and wants something to do. He decides he would like to go to the beach. Arthur packs up his car and drives to the beach. Once there, he spends a few hours playing in the sand. Afterwards, Arthur has a good time at the beach.
Arthur goes to the beach. Arthur is bored one day and decides to go to the beach. He packs up his towel, swims in the ocean, and gets out of his car. When he arrives at the beach it's very sunny and nice. Arthur spends all day playing in the water. Afterwards, he comes home and rests for a bit.
Arthur goes to the beach. Arthur is bored one day. He decides he needs something to do. He calls his friend Steve and asks if they want to go to the beach. Steve tells Arthur that it's not a good idea to go to the beach. Now Arthur knows that he should have asked Steve for advice.
Arthur goes to the beach. Arthur is bored at home one day. He decides he needs something to do. He heads out to the local beach and plays in the sand. At the beach, Arthur sees many beautiful people. Arthur feels happy that he no longer feels bored.
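The samples above can be reproduced (up to sampling randomness) by passing each listed decoding configuration to a text-generation pipeline; a minimal sketch for one of them (`max_length` is an arbitrary cap added for the example):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jppaolim/v55_Large_2E")

# One of the decoding configurations listed above; max_length is an arbitrary cap.
print(generator(
    "Arthur goes to the beach.",
    do_sample=True,
    top_p=0.9,
    top_k=40,
    temperature=0.8,
    repetition_penalty=1.1,
    max_length=100,
    num_return_sequences=1,
))
```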
|
TinySuitStarfish/q-FrozenLake-v1-4x4-noSlippery
|
TinySuitStarfish
| 2022-06-06T00:13:33Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-06T00:13:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Deep RL course notebook.
model = load_from_hub(repo_id="TinySuitStarfish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Xibanya/City
|
Xibanya
| 2022-06-05T22:06:56Z | 0 | 2 | null |
[
"PyTorch",
"Transformers",
"text-to-image",
"ru",
"en",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-image
| 2022-06-05T07:02:45Z |
---
license: cc-by-nc-4.0
pipeline_tag: text-to-image
tags:
- PyTorch
- Transformers
language:
- ru
- en
---
# ruDALL-E Cities
### Generate illustrations of cities from image prompts and/or text
Finetuned from [Malevich XL](https://huggingface.co/sberbank-ai/rudalle-Malevich) on thousands of anime screenshots of cities. For more information on training, see https://github.com/Xibanya/ru-dalle
<img src="https://huggingface.co/Xibanya/City/resolve/main/Pics/city-examples-sm.png" width="1024" height="1024">
|
poltoran/RL-course-1-unit-ppo-LunarLander-v2-v1
|
poltoran
| 2022-06-05T21:33:23Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-05T21:16:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 284.67 +/- 17.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; verify it against the files in this repository.
checkpoint = load_from_hub(repo_id="poltoran/RL-course-1-unit-ppo-LunarLander-v2-v1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Jherb/finetuning-sentiment-model-3000-samples
|
Jherb
| 2022-06-05T21:21:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-05T21:00:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3063
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AlphaZetta/finetuning-sentiment-model-finefood
|
AlphaZetta
| 2022-06-05T20:08:23Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-05T19:53:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-finefood
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-finefood
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2269
- Accuracy: 0.95
- F1: 0.9696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AlphaZetta/finetuning-sentiment-model-3000-samples
|
AlphaZetta
| 2022-06-05T19:32:45Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-04T18:00:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4338
- Accuracy: 0.85
- F1: 0.9189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
limsc/reqscibert-tapt-epoch20
|
limsc
| 2022-06-05T18:10:49Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-05T18:10:36Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: reqscibert-tapt-epoch20
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# reqscibert-tapt-epoch20
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 34950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
x574chen/wav2vec2-common_voice-tr-demo
|
x574chen
| 2022-06-05T17:14:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-02T02:04:34Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3815
- Wer: 0.3493
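A minimal transcription sketch with the Transformers `pipeline` API (the audio path is illustrative; the input should be 16 kHz Turkish speech):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="x574chen/wav2vec2-common_voice-tr-demo",
)
# Replace with a real 16 kHz Turkish audio file.
print(asr("sample_tr.wav"))
```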
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5559 | 1.0 |
| No log | 1.83 | 200 | 3.0161 | 0.9999 |
| No log | 2.75 | 300 | 0.8587 | 0.7443 |
| No log | 3.67 | 400 | 0.5855 | 0.6121 |
| 3.1095 | 4.59 | 500 | 0.4841 | 0.5204 |
| 3.1095 | 5.5 | 600 | 0.4533 | 0.4923 |
| 3.1095 | 6.42 | 700 | 0.4157 | 0.4342 |
| 3.1095 | 7.34 | 800 | 0.4304 | 0.4334 |
| 3.1095 | 8.26 | 900 | 0.4097 | 0.4068 |
| 0.2249 | 9.17 | 1000 | 0.4049 | 0.3881 |
| 0.2249 | 10.09 | 1100 | 0.3993 | 0.3809 |
| 0.2249 | 11.01 | 1200 | 0.3855 | 0.3782 |
| 0.2249 | 11.93 | 1300 | 0.3923 | 0.3713 |
| 0.2249 | 12.84 | 1400 | 0.3833 | 0.3591 |
| 0.1029 | 13.76 | 1500 | 0.3811 | 0.3570 |
| 0.1029 | 14.68 | 1600 | 0.3834 | 0.3499 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.12.0a0+2c916ef
- Datasets 2.2.2
- Tokenizers 0.10.3
|
nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum-mlsum
|
nestoralvaro
| 2022-06-05T16:47:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:mlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-05T09:56:26Z |
---
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-xsum-mlsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
args: es
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-xsum-mlsum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 66592 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Edric111/distilbert-base-uncased-finetuned-ner
|
Edric111
| 2022-06-05T16:32:56Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-03T17:07:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9273854328093868
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.9322873198686918
- name: Accuracy
type: accuracy
value: 0.9840341874910639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9274
- Recall: 0.9372
- F1: 0.9323
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2378 | 1.0 | 878 | 0.0719 | 0.9107 | 0.9200 | 0.9154 | 0.9801 |
| 0.0509 | 2.0 | 1756 | 0.0620 | 0.9156 | 0.9311 | 0.9233 | 0.9821 |
| 0.0307 | 3.0 | 2634 | 0.0599 | 0.9274 | 0.9372 | 0.9323 | 0.9840 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Rgl73/xlm-roberta-base-finetuned-panx-de
|
Rgl73
| 2022-06-05T15:51:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-05T10:46:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8608532209375177
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1446
- F1: 0.8609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2623 | 1.0 | 787 | 0.1756 | 0.8132 |
| 0.1321 | 2.0 | 1574 | 0.1497 | 0.8458 |
| 0.0856 | 3.0 | 2361 | 0.1446 | 0.8609 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jppaolim/v53_Large_AdaMW
|
jppaolim
| 2022-06-05T14:20:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-05T13:40:58Z |
# My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur was driving his dog on the highway. He felt a breeze approaching him. He took his dog out and let her out. The dog jumped and got lost. Arthur found her under a tree for her.
Arthur goes to the beach. Arthur decides to go to the beach. First he has to go to the sand. Then he has to walk the beach. Finally he will swim in the ocean. Finally he will swim in the ocean.
Arthur goes to the beach. Arthur is the type of person. He wanted to be the one who didn't want to be alone. He joined the swim team in high school. They invited him to swim in a bikini race. He didn't feel comfortable that he was alone.
Arthur goes to the beach. Arthur is going to the beach. He heads to the ocean. When he gets to the beach he realizes he forgot his sunscreen. He gets in big trouble for being late. The police write him up.
Arthur goes to the beach. Arthur was so sad. The weather was so cold and cold. He took his friends over to watch a movie. After the movie they played volleyball. Arthur was happy that his friends were there to play.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur had always wanted to see the beach. He took his girlfriend out and drove to the beach. He parked his car in the parking lot. Arthur looked outside and saw that the beach was not looking too well. After a couple of hours the car had rolled into a hole.
Arthur goes to the beach. Arthur was going to the beach to relax on the beach. He packed his towel and his sunblock. After the sunbathes, Arthur got a tan. He felt much better afterwards. He spent the next day tanning every day with his friends.
Arthur goes to the beach. Arthur was about to leave for work. It was a cloudy day outside, but he decided to go into town. He drove home and watched the ocean. Suddenly, a big wave wave hit his car in the road. He didn't get out until days later.
Arthur goes to the beach. Arthur is an old man who always wants to go to a beach. He takes some of his friends and family go. He gets his family to drive to the beach. They drive to the beach. He goes to the beach.
Arthur goes to the beach. Arthur went to the beach with his friend. They decided to go into a surf park. Once at the beach he had fun swimming. He decided to spend the rest of the day surfing and surfing. His day was wonderful.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is a very lonely man. He decides he wants to meet someone who is nice to him. He finds a man with a beautiful young woman. The two men fall in love and start to date. They realize they are best friends.
Arthur goes to the beach. Arthur always wanted to see the ocean. His dad told him that it was too hot. One day Arthur asked his friend Jack if he could take her. Jack agreed to come with him and they drove to the beach. They had fun relaxing in the ocean.
Arthur goes to the beach. Arthur is going to go swimming. He has never been to a beach before. Arthur decides to take his friends out to the beach. He takes his friend out to the beach and spends time on the water. Arthur feels glad he went to the beach.
Arthur goes to the beach. Arthur is going to the beach. He decides he should go to the beach with his friends. He arrives at the beach with his friends. They spend the day playing volleyball. The kids all go home happy.
Arthur goes to the beach. Arthur is very adventurous. He decides to go to the beach. He heads to the beach with his friends. He spends two days relaxing on the beach. He is happy that he got to spend time relaxing on the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is on vacation in Hawaii. He decides to go to the beach with his friends. At the beach he meets a nice girl named Courtney. The two of them meet up for a date and get married. Now they are both happy that he went to the beach.
Arthur goes to the beach. Arthur is a very adventurous person. He wants to go to the beach with his friends. They decide to go to the beach to relax. Then, they spend the day relaxing on the beach. Finally, he is able to relax on the beach.
Arthur goes to the beach. Arthur was very excited for a relaxing day at the beach. He took his family and friends out on the water. They swam and swam all day long. It was a relaxing day that made him happy. The beach was so relaxing he couldn't be happier.
Arthur goes to the beach. Arthur was going to the beach with his family. He had never seen a water park before. They arrived at the beach and started to explore. It was a peaceful day. Afterwards, he decided to go swimming with his friends.
Arthur goes to the beach. Arthur was going to go to the beach. He had never seen one before. He decided to drive to the beach. He drove for a couple of hours. Finally he arrived at the beach.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is a very adventurous person. He decides that he needs to get out of his house and go for it. He gets out on the beach with friends. They spend hours swimming in the ocean. He feels happy that he went outside.
Arthur goes to the beach. Arthur is going to the beach with his friends. He's going to be in the water for a couple of minutes. When he gets out, it begins to rainy and thunderstorms. The rain starts to pour down on him. He will not be able to swim until after that.
Arthur goes to the beach. Arthur is going to go to the beach with his friends. He gets a flat tire on the road. His friend comes over and takes him out. The car begins to drive. They all have fun at the beach.
Arthur goes to the beach. Arthur is a very adventurous man who loves swimming in the ocean. He decides that he needs to swim in the ocean at least once per week. At first, he finds it difficult to swim because of his weight. Finally he begins to enjoy the water and relax. After a few weeks, Arthur feels happy that he finally gets to swim.
Arthur goes to the beach. Arthur is going to go to the beach with his friends. They decide to play volleyball. He gets a lot of sunblock on his legs and body. The other team wins by a large margin. It's fun playing basketball in the ocean.
|
huggingtweets/philwornath
|
huggingtweets
| 2022-06-05T14:13:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-05T14:12:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/philwornath/1654438397344/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1496963787655716869/MJrzMo_D_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Phil Wornath 🇪🇺</div>
<div style="text-align: center; font-size: 14px;">@philwornath</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Phil Wornath 🇪🇺.
| Data | Phil Wornath 🇪🇺 |
| --- | --- |
| Tweets downloaded | 1435 |
| Retweets | 280 |
| Short tweets | 142 |
| Tweets kept | 1013 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dbqyh6j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @philwornath's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2f9pcn01) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2f9pcn01/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/philwornath')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fractalego/samsumbot
|
fractalego
| 2022-06-05T13:26:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"arxiv:2106.09685",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-28T07:14:15Z |
# What is SamSum Bot?
This is a model fine-tuned on the [SamSum dataset](https://huggingface.co/datasets/samsum).
However, instead of training the system to summarize conversations, the model is trained to predict a conversation given a summary.
The prompt needs to be in the following form
```
A partial summary of the conversation is:
{summary}
With the dialogue being:
{dialogue}
```
where *{summary}* is a text as in
```
John went out to buy groceries. He meets Jane on the way and they talk about the weather.
```
and the *{dialogue}* is structured as speaking lines, each preceded by the speaker's name
```
John: Oh hi Jane.
Jane: Nice to see you?
John: The weather looks nice today
Jane: [PREDICTION]
```
The system is based on EleutherAI's GPT-J-6B, [quantized by Hivemind](https://huggingface.co/hivemind/gpt-j-6B-8bit), and has been fine-tuned with the [LoRA method](https://arxiv.org/abs/2106.09685).
A simple back-end is available in [this repo](https://github.com/fractalego/samsum-bot), where the model is served using Torchserve.
A terminal-like front-end interface is available [here](https://github.com/fractalego/samsumbot_client).
This interface is the one used in my website [http://fractalego.io](http://fractalego.io).
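As an illustrative sketch (not part of the original card), the prompt can be assembled programmatically before being sent to whatever generation back-end serves the model; the Torchserve back-end linked above is the intended route, so this only shows the template construction:
```python
# A hypothetical prompt-building example; the summary and dialogue below are sample values.
summary = "John went out to buy groceries. He meets Jane on the way and they talk about the weather."
dialogue = "\n".join([
    "John: Oh hi Jane.",
    "Jane: Nice to see you?",
    "John: The weather looks nice today",
    "Jane:",  # the model is expected to continue from here
])
prompt = (
    "A partial summary of the conversation is:\n"
    f"{summary}\n"
    "With the dialogue being:\n"
    f"{dialogue}"
)
print(prompt)
```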
|
haritzpuerto/TinyBERT_General_4L_312D-squad
|
haritzpuerto
| 2022-06-05T13:16:50Z | 2,952 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-05T12:50:36Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: TinyBERT_General_4L_312D-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_General_4L_312D-squad
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on the squad dataset.
It achieves the following results on the evaluation set:
- exact_match: 33.301797540208135
- f1: 45.03886349847048
- Loss: 2.5477
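A minimal usage sketch (assuming the standard `transformers` question-answering pipeline applies to this SQuAD-fine-tuned checkpoint):
```python
from transformers import pipeline

# Hypothetical example inputs; any (question, context) pair works.
qa = pipeline("question-answering", model="haritzpuerto/TinyBERT_General_4L_312D-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="TinyBERT_General_4L_312D was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```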
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7498 | 1.0 | 4380 | 2.5477 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
rotvderme/q-FrozenLake-v1-4x4-noSlippery
|
rotvderme
| 2022-06-05T13:07:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-05T12:58:30Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="rotvderme/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
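The `load_from_hub` and `evaluate_agent` helpers above come from the Hugging Face Deep RL course notebook rather than a published package; a minimal sketch of the loading helper, assuming the model is stored as a pickled dictionary, is:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dictionary from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```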
|
shikhar1997/q-FrozenLake-v1-4x4-noSlippery
|
shikhar1997
| 2022-06-05T12:55:57Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-05T12:55:50Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="shikhar1997/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
MadFace/t5-cnn
|
MadFace
| 2022-06-05T06:11:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-01T22:51:48Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cnn
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4562
- Rouge1: 25.1836
- Rouge2: 12.0806
- Rougel: 20.818
- Rougelsum: 23.6868
- Gen Len: 18.9986
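Given the ROUGE metrics above, this appears to be a summarization checkpoint; a hedged usage sketch (assuming the generic `transformers` summarization pipeline works with this repo) might look like:
```python
from transformers import pipeline

# Assumption: the checkpoint is a T5-style summarizer usable through the generic pipeline API.
summarizer = pipeline("summarization", model="MadFace/t5-cnn")
article = (
    "The city council voted on Tuesday to expand the bike-lane network, "
    "citing a rise in cycling commuters and a drop in downtown traffic."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```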
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.4286 | 1.0 | 50000 | 1.4562 | 25.1836 | 12.0806 | 20.818 | 23.6868 | 18.9986 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ITESM/st_demo_6
|
ITESM
| 2022-06-05T05:06:07Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"es",
"dataset:hackathon-pln-es/nli-es",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-05T05:05:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language:
- es
datasets:
- hackathon-pln-es/nli-es
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---
# bertin-roberta-base-finetuning-esnli
This is a [sentence-transformers](https://www.SBERT.net) model trained on a
collection of NLI tasks for Spanish. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is based on the siamese-network approach from [this paper](https://arxiv.org/pdf/1908.10084.pdf).
<!--- Describe your model here -->
You can see a demo for this model [here](https://huggingface.co/spaces/hackathon-pln-es/Sentence-Embedding-Bertin).
You can find our other model, **paraphrase-spanish-distilroberta** [here](https://huggingface.co/hackathon-pln-es/paraphrase-spanish-distilroberta) and its demo [here](https://huggingface.co/spaces/hackathon-pln-es/Paraphrase-Bertin).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]
model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
model = AutoModel.from_pretrained('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Our model was evaluated on the task of Semantic Textual Similarity using the [SemEval-2015 Task](https://alt.qcri.org/semeval2015/task2/) for [Spanish](http://alt.qcri.org/semeval2015/task2/data/uploads/sts2015-es-test.zip). We measure the following correlation scores, compared against a BETO-based baseline:
| | [BETO STS](https://huggingface.co/espejelomar/sentece-embeddings-BETO) | BERTIN STS (this model) | Relative improvement |
|-------------------:|---------:|-----------:|---------------------:|
| cosine_pearson | 0.609803 | 0.683188 | +12.03 |
| cosine_spearman | 0.528776 | 0.615916 | +16.48 |
| euclidean_pearson | 0.590613 | 0.672601 | +13.88 |
| euclidean_spearman | 0.526529 | 0.611539 | +16.15 |
| manhattan_pearson | 0.589108 | 0.672040 | +14.08 |
| manhattan_spearman | 0.525910 | 0.610517 | +16.09 |
| dot_pearson | 0.544078 | 0.600517 | +10.37 |
| dot_spearman | 0.460427 | 0.521260 | +13.21 |
## Training
The model was trained with the parameters:
**Dataset**
We used a collection of datasets of Natural Language Inference as training data:
- [ESXNLI](https://raw.githubusercontent.com/artetxem/esxnli/master/esxnli.tsv), only the part in spanish
- [SNLI](https://nlp.stanford.edu/projects/snli/), automatically translated
- [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/), automatically translated
The whole dataset used is available [here](https://huggingface.co/datasets/hackathon-pln-es/nli-es).
Here is the trick we used to increase the amount of training data:
```
# reader: an iterator over the NLI rows (e.g., a csv.DictReader); add_to_samples collects
# (sentence1, sentence2, label) triples — both are defined in the training script.
for row in reader:
    if row['language'] == 'es':
        sent1 = row['sentence1'].strip()
        sent2 = row['sentence2'].strip()
        add_to_samples(sent1, sent2, row['gold_label'])
        add_to_samples(sent2, sent1, row['gold_label'])  # also add the reversed pair
```
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader`
of length 1818 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 909,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Authors
[Anibal Pérez](https://huggingface.co/Anarpego),
[Emilio Tomás Ariza](https://huggingface.co/medardodt),
[Lautaro Gesuelli](https://huggingface.co/Lgesuelli) y
[Mauricio Mazuecos](https://huggingface.co/mmazuecos).
|
wogkr810/mnm
|
wogkr810
| 2022-06-05T05:03:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-03T04:33:50Z |
# Model
---
## Fine-tuned from the Reader SOTA model used in the [KLUE MRC competition](https://github.com/boostcampaitech3/level2-mrc-level2-nlp-09).
- Dataset: preprocessing and augmentation applied to the dataset built after tagging
- [Hugging Face: MRC Reader SOTA model](https://huggingface.co/Nonegom/roberta_finetune_twice)
- [GitHub Issue: MRC Reader SOTA model description](https://github.com/boostcampaitech3/level2-mrc-level2-nlp-09/issues/38)
|
ITESM/st_demo_5
|
ITESM
| 2022-06-05T04:55:46Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-05T04:55:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss, using the true pairs as the targets. A minimal sketch of this objective follows.
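The sketch below is an illustrative re-implementation of that in-batch objective, not the project's actual training script:
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0):
    """Cross-entropy over scaled cosine similarities; the i-th positive is the only true match for the i-th anchor."""
    anchors = F.normalize(anchors, p=2, dim=1)
    positives = F.normalize(positives, p=2, dim=1)
    similarities = anchors @ positives.T * scale  # (batch, batch) cosine-similarity matrix
    labels = torch.arange(anchors.size(0), device=anchors.device)  # true pair indices on the diagonal
    return F.cross_entropy(similarities, labels)
```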
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
ITESM/st_demo_2
|
ITESM
| 2022-06-05T04:38:02Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-05T04:37:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss, using the true pairs as the targets.
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
limsc/reqbert-tapt-epoch30
|
limsc
| 2022-06-05T03:50:49Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-05T03:50:32Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: reqbert-tapt-epoch30
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# reqbert-tapt-epoch30
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
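A hedged usage sketch for this masked-LM checkpoint (assuming the generic fill-mask pipeline applies; the repo ships TensorFlow weights, hence `framework="tf"`):
```python
from transformers import pipeline

# The example sentence is hypothetical; any input containing [MASK] works.
fill_mask = pipeline("fill-mask", model="limsc/reqbert-tapt-epoch30", framework="tf")
print(fill_mask("The system shall [MASK] the user before granting access."))
```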
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 21740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nitishkumargundapu793/autotrain-chat-bot-responses-949231426
|
nitishkumargundapu793
| 2022-06-05T03:16:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:nitishkumargundapu793/autotrain-data-chat-bot-responses",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-05T03:13:45Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- nitishkumargundapu793/autotrain-data-chat-bot-responses
co2_eq_emissions: 0.01123534537751425
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 949231426
- CO2 Emissions (in grams): 0.01123534537751425
## Validation Metrics
- Loss: 0.26922607421875
- Accuracy: 1.0
- Macro F1: 1.0
- Micro F1: 1.0
- Weighted F1: 1.0
- Macro Precision: 1.0
- Micro Precision: 1.0
- Weighted Precision: 1.0
- Macro Recall: 1.0
- Micro Recall: 1.0
- Weighted Recall: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/nitishkumargundapu793/autotrain-chat-bot-responses-949231426
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nitishkumargundapu793/autotrain-chat-bot-responses-949231426", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nitishkumargundapu793/autotrain-chat-bot-responses-949231426", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
sayanmandal/t5-small_6_3-en-hi_en_LinCE
|
sayanmandal
| 2022-06-05T00:31:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-04T21:54:56Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small_6_3-en-hi_en_LinCE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_6_3-en-hi_en_LinCE
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2034
- Bleu: 7.8135
- Gen Len: 39.5564
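A hedged usage sketch (assuming generic text2text generation applies; the card does not document a task prefix, so the exact input format expected by this checkpoint is a guess):
```python
from transformers import pipeline

# Hypothetical code-switched (Hinglish) input; direction and prefix requirements are assumptions.
translator = pipeline("text2text-generation", model="sayanmandal/t5-small_6_3-en-hi_en_LinCE")
print(translator("yaar, that movie was too good, we should watch it again", max_length=64))
```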
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 94 | 3.5424 | 0.9187 | 16.7437 |
| No log | 1.99 | 188 | 3.1434 | 1.2886 | 16.8158 |
| No log | 2.99 | 282 | 2.9494 | 1.4577 | 16.7824 |
| No log | 3.99 | 376 | 2.8233 | 1.4745 | 16.8879 |
| No log | 4.99 | 470 | 2.7300 | 1.7116 | 16.6636 |
| 3.6303 | 5.99 | 564 | 2.6589 | 1.7857 | 16.6302 |
| 3.6303 | 6.99 | 658 | 2.6005 | 1.8572 | 16.4553 |
| 3.6303 | 7.99 | 752 | 2.5456 | 2.139 | 16.3925 |
| 3.6303 | 8.99 | 846 | 2.5023 | 2.3835 | 16.2911 |
| 3.6303 | 9.99 | 940 | 2.4725 | 2.5607 | 16.3271 |
| 2.9087 | 10.99 | 1034 | 2.4272 | 2.6614 | 16.3138 |
| 2.9087 | 11.99 | 1128 | 2.3977 | 2.9623 | 16.3338 |
| 2.9087 | 12.99 | 1222 | 2.3686 | 3.1248 | 16.2443 |
| 2.9087 | 13.99 | 1316 | 2.3438 | 3.3294 | 16.3458 |
| 2.9087 | 14.99 | 1410 | 2.3253 | 3.3885 | 16.3591 |
| 2.6588 | 15.99 | 1504 | 2.3028 | 3.3985 | 16.3124 |
| 2.6588 | 16.99 | 1598 | 2.2839 | 3.3772 | 16.3858 |
| 2.6588 | 17.99 | 1692 | 2.2704 | 3.5804 | 16.3872 |
| 2.6588 | 18.99 | 1786 | 2.2533 | 3.8751 | 16.2697 |
| 2.6588 | 19.99 | 1880 | 2.2378 | 4.0003 | 16.3271 |
| 2.6588 | 20.99 | 1974 | 2.2233 | 4.0271 | 16.3031 |
| 2.5079 | 21.99 | 2068 | 2.2160 | 4.1898 | 16.3057 |
| 2.5079 | 22.99 | 2162 | 2.2010 | 4.1216 | 16.3031 |
| 2.5079 | 23.99 | 2256 | 2.1935 | 4.1311 | 16.2644 |
| 2.5079 | 24.99 | 2350 | 2.1833 | 4.1373 | 16.3138 |
| 2.5079 | 25.99 | 2444 | 2.1725 | 4.3471 | 16.3057 |
| 2.4027 | 26.99 | 2538 | 2.1657 | 4.183 | 16.3298 |
| 2.4027 | 27.99 | 2632 | 2.1611 | 4.2867 | 16.3351 |
| 2.4027 | 28.99 | 2726 | 2.1531 | 4.2689 | 16.2737 |
| 2.4027 | 29.99 | 2820 | 2.1482 | 4.4802 | 16.2644 |
| 2.4027 | 30.99 | 2914 | 2.1443 | 4.469 | 16.231 |
| 2.3251 | 31.99 | 3008 | 2.1375 | 4.5295 | 16.227 |
| 2.3251 | 32.99 | 3102 | 2.1330 | 4.4799 | 16.2243 |
| 2.3251 | 33.99 | 3196 | 2.1307 | 4.7124 | 16.2417 |
| 2.3251 | 34.99 | 3290 | 2.1248 | 4.5954 | 16.3004 |
| 2.3251 | 35.99 | 3384 | 2.1215 | 4.7455 | 16.215 |
| 2.3251 | 36.99 | 3478 | 2.1166 | 4.6233 | 16.2016 |
| 2.2818 | 37.99 | 3572 | 2.1147 | 4.6843 | 16.219 |
| 2.2818 | 38.99 | 3666 | 2.1112 | 4.7068 | 16.2163 |
| 2.2818 | 39.99 | 3760 | 2.1071 | 4.684 | 16.223 |
| 2.2818 | 40.99 | 3854 | 2.1034 | 4.7323 | 16.2523 |
| 2.2818 | 41.99 | 3948 | 2.0998 | 4.6406 | 16.2016 |
| 2.2392 | 42.99 | 4042 | 2.1017 | 4.7609 | 16.1976 |
| 2.2392 | 43.99 | 4136 | 2.1021 | 4.7634 | 16.2069 |
| 2.2392 | 44.99 | 4230 | 2.0994 | 4.7854 | 16.1976 |
| 2.2392 | 45.99 | 4324 | 2.0980 | 4.7562 | 16.2243 |
| 2.2392 | 46.99 | 4418 | 2.0964 | 4.7921 | 16.219 |
| 2.2192 | 47.99 | 4512 | 2.0970 | 4.8029 | 16.2377 |
| 2.2192 | 48.99 | 4606 | 2.0967 | 4.7953 | 16.2176 |
| 2.2192 | 49.99 | 4700 | 2.0968 | 4.819 | 16.2457 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
elgeish/wav2vec2-large-xlsr-53-arabic
|
elgeish
| 2022-06-04T23:37:05Z | 2,693 | 15 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ar",
"dataset:arabic_speech_corpus",
"dataset:mozilla-foundation/common_voice_6_1",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- arabic_speech_corpus
- mozilla-foundation/common_voice_6_1
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: elgeish-wav2vec2-large-xlsr-53-arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1 (Arabic)
type: mozilla-foundation/common_voice_6_1
config: ar
split: test
args:
language: ar
metrics:
- name: Test WER
type: wer
value: 26.55
- name: Validation WER
type: wer
value: 23.39
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
dataset = load_dataset("common_voice", "ar", split="test[:10]")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
dataset = dataset.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.tokenizer.batch_decode(predicted)
return batch
dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"])
for reference, predicted in zip(dataset["sentence"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", buckwalter.untrans(predicted))
print("--")
```
Here's the output:
```
reference: ألديك قلم ؟
predicted: هلديك قالر
--
reference: ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.
predicted: ليست نالك مسافة على هذه الأرض أبعد من يوم أمس
--
reference: إنك تكبر المشكلة.
predicted: إنك تكبر المشكلة
--
reference: يرغب أن يلتقي بك.
predicted: يرغب أن يلتقي بك
--
reference: إنهم لا يعرفون لماذا حتى.
predicted: إنهم لا يعرفون لماذا حتى
--
reference: سيسعدني مساعدتك أي وقت تحب.
predicted: سيسئدني مساعد سكرأي وقت تحب
--
reference: أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة.
predicted: أحب ناضريةً علمية إلي هي أنحل قتزح المكونا بالكامل من الأمت عن المفقودة
--
reference: سأشتري له قلماً.
predicted: سأشتري له قلما
--
reference: أين المشكلة ؟
predicted: أين المشكل
--
reference: وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ
predicted: ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون
--
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import jiwer
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import set_seed, Wav2Vec2ForCTC, Wav2Vec2Processor
set_seed(42)
test_split = load_dataset("common_voice", "ar", split="test")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_split = test_split.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").to("cuda").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values.to("cuda")).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.batch_decode(predicted)
return batch
test_split = test_split.map(predict, batched=True, batch_size=16, remove_columns=["speech"])
transformation = jiwer.Compose([
# normalize some diacritics, remove punctuation, and replace Persian letters with Arabic ones
jiwer.SubstituteRegexes({
r'[auiFNKo\~_،؟»\?;:\-,\.؛«!"]': "", "\u06D6": "",
r"[\|\{]": "A", "p": "h", "ک": "k", "ی": "y"}),
# default transformation below
jiwer.RemoveMultipleSpaces(),
jiwer.Strip(),
jiwer.SentencesToListOfWords(),
jiwer.RemoveEmptyStrings(),
])
metrics = jiwer.compute_measures(
truth=[buckwalter.trans(s) for s in test_split["sentence"]], # Buckwalter transliteration
hypothesis=test_split["predicted"],
truth_transform=transformation,
hypothesis_transform=transformation,
)
print(f"WER: {metrics['wer']:.2%}")
```
**Test Result**: 26.55%
## Training
For more details, see [Fine-Tuning with Arabic Speech Corpus](https://github.com/huggingface/transformers/tree/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/wav2vec2#fine-tuning-with-arabic-speech-corpus).
This model represents Arabic in a format called [Buckwalter transliteration](https://en.wikipedia.org/wiki/Buckwalter_transliteration).
The Buckwalter format only includes ASCII characters, some of which are non-alpha (e.g., `">"` maps to `"أ"`).
The [lang-trans](https://github.com/kariminf/lang-trans) package is used to convert (transliterate) the Arabic abjad.
[This script](https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/wav2vec2/finetune_large_xlsr_53_arabic_speech_corpus.sh)
was used to first fine-tune [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the `train` split of the [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus) dataset;
the `test` split was used for model selection; the resulting model at this point is saved as [elgeish/wav2vec2-large-xlsr-53-levantine-arabic](https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-levantine-arabic).
Training was then resumed using the `train` split of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset;
the `validation` split was used for model selection;
training was stopped to meet the deadline of [Fine-Tune-XLSR Week](https://github.com/huggingface/transformers/blob/700229f8a4003c4f71f29275e0874b5ba58cd39d/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md):
this model is the checkpoint at 100k steps and a validation WER of **23.39%**.
<img src="https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic/raw/main/validation_wer.png" alt="Validation WER" width="100%" />
It's worth noting that the validation WER was still trending down, which suggests further training could help (for example, by resuming training with the decayed learning rate of 7e-6).
## Future Work
One area to explore is using `attention_mask` in model input, which is recommended [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
Another direction is data augmentation using the datasets that were used to train the models listed [here](https://paperswithcode.com/sota/speech-recognition-on-common-voice-arabic). A sketch of the `attention_mask` idea is shown below.
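The following is a minimal sketch of the `attention_mask` idea, reusing the names from the evaluation snippet above. It assumes the processor's feature extractor is configured with `return_attention_mask=True`; otherwise the processor does not return a mask.
```python
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(
        inputs.input_values.to("cuda"),
        attention_mask=inputs.attention_mask.to("cuda"),  # only present if return_attention_mask=True
    ).logits
predicted = torch.argmax(logits, dim=-1)
```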
|
jianyang/LunarLander-v2
|
jianyang
| 2022-06-04T22:39:46Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T21:57:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 182.82 +/- 79.11
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
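In the meantime, a minimal sketch (the checkpoint filename below is a guess; check the repository's file list for the actual name):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename -- adjust to the actual .zip file in the repo.
checkpoint = load_from_hub(repo_id="jianyang/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```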
|
nutjung/TEST2ppo-LunarLander-v2-4
|
nutjung
| 2022-06-04T22:08:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T22:08:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 273.94 +/- 14.80
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
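In the meantime, a minimal sketch (the checkpoint filename below is a guess; check the repository's file list for the actual name):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename -- adjust to the actual .zip file in the repo.
checkpoint = load_from_hub(repo_id="nutjung/TEST2ppo-LunarLander-v2-4", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```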
|
atoivat/distilbert-base-uncased-finetuned-squad
|
atoivat
| 2022-06-04T21:13:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-04T18:10:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2086 | 1.0 | 5533 | 1.1565 |
| 0.9515 | 2.0 | 11066 | 1.1225 |
| 0.7478 | 3.0 | 16599 | 1.1504 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
yanekyuk/camembert-keyword-discriminator
|
yanekyuk
| 2022-06-04T21:08:51Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-04T20:23:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
model-index:
- name: camembert-keyword-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-keyword-discriminator
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Precision: 0.6646
- Recall: 0.7047
- Accuracy: 0.9344
- F1: 0.6841
- Ent/precision: 0.7185
- Ent/accuracy: 0.8157
- Ent/f1: 0.7640
- Con/precision: 0.5324
- Con/accuracy: 0.4860
- Con/f1: 0.5082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:|
| 0.2016 | 1.0 | 1875 | 0.1910 | 0.5947 | 0.7125 | 0.9243 | 0.6483 | 0.6372 | 0.8809 | 0.7395 | 0.4560 | 0.3806 | 0.4149 |
| 0.1454 | 2.0 | 3750 | 0.1632 | 0.6381 | 0.7056 | 0.9324 | 0.6701 | 0.6887 | 0.8291 | 0.7524 | 0.5064 | 0.4621 | 0.4833 |
| 0.1211 | 3.0 | 5625 | 0.1702 | 0.6703 | 0.6678 | 0.9343 | 0.6690 | 0.7120 | 0.7988 | 0.7529 | 0.5471 | 0.4094 | 0.4684 |
| 0.1021 | 4.0 | 7500 | 0.1745 | 0.6777 | 0.6708 | 0.9351 | 0.6742 | 0.7206 | 0.7956 | 0.7562 | 0.5557 | 0.4248 | 0.4815 |
| 0.0886 | 5.0 | 9375 | 0.1913 | 0.6540 | 0.7184 | 0.9340 | 0.6847 | 0.7022 | 0.8396 | 0.7648 | 0.5288 | 0.4795 | 0.5030 |
| 0.0781 | 6.0 | 11250 | 0.2021 | 0.6605 | 0.7132 | 0.9344 | 0.6858 | 0.7139 | 0.8258 | 0.7658 | 0.5293 | 0.4913 | 0.5096 |
| 0.0686 | 7.0 | 13125 | 0.2127 | 0.6539 | 0.7132 | 0.9337 | 0.6822 | 0.7170 | 0.8172 | 0.7638 | 0.5112 | 0.5083 | 0.5098 |
| 0.0667 | 8.0 | 15000 | 0.2180 | 0.6646 | 0.7047 | 0.9344 | 0.6841 | 0.7185 | 0.8157 | 0.7640 | 0.5324 | 0.4860 | 0.5082 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
edwinhung/bird_classifier
|
edwinhung
| 2022-06-04T20:52:15Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-06-04T19:43:58Z |
---
tags:
- fastai
---
# Model card
## Model description
A neural network model trained with fastai and timm to classify 400 bird species in an image.
## Intended uses & limitations
This bird classifier predicts the bird species in a given image. The image fed to the model should contain only one bird. Because this is a multi-class classifier, it will always output one of the 400 bird classes, even if the image contains no bird at all.
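A minimal inference sketch (assuming the model was pushed with the fastai integration in `huggingface_hub`; the image path is a placeholder):
```python
from huggingface_hub import from_pretrained_fastai

# Load the exported fastai Learner from the Hub.
learn = from_pretrained_fastai("edwinhung/bird_classifier")

# Predict the species for an image containing a single bird (placeholder path).
pred_class, pred_idx, probs = learn.predict("some_bird.jpg")
print(pred_class, probs[pred_idx])
```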
## Training and evaluation data
The pre-trained backbone is an EfficientNet from the timm library, specifically *efficientnet_b3a*. The training data comes from the Kaggle dataset [BIRDS 400 - SPECIES IMAGE CLASSIFICATION](https://www.kaggle.com/datasets/gpiosenka/100-bird-species). The evaluation accuracy on the provided Kaggle test set is 99.4%; note that this is likely not representative of real-world performance, since the dataset provider mentions the test images were hand-picked as the best images.
|
vjeansel/RI
|
vjeansel
| 2022-06-04T19:06:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T18:21:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-v1
results:
- metrics:
- type: mean_reward
value: 252.71 +/- 23.21
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO-v1** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-v1** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
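In the meantime, a minimal sketch (the checkpoint filename below is a guess; check the repository's file list for the actual name):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename -- adjust to the actual .zip file in the repo.
checkpoint = load_from_hub(repo_id="vjeansel/RI", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```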
|
etmckinley/BOTHALTEROUT
|
etmckinley
| 2022-06-04T18:26:24Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-02T18:32:15Z |
---
license: mit
tags:
model-index:
- name: BERFALTER
results: []
widget:
- text: "Gregg Berhalter"
- text: "The USMNT won't win the World Cup"
- text: "The Soccer Media in this country"
- text: "Ball don't"
- text: "This lineup"
---
# BOTHALTEROUT
This model is a fine-tuned version of [GPT-2](https://huggingface.co/gpt2), trained on 21,832 tweets from 12 Twitter users with very strong opinions about the United States Men's National Team.
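A minimal generation sketch (assuming the standard `transformers` text-generation pipeline; the prompt is one of the widget examples above):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="etmckinley/BOTHALTEROUT")
print(generator("The USMNT won't win the World Cup", max_length=60, num_return_sequences=3))
```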
## Limitations and bias
The model has all [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
Additionally, BOTHALTEROUT can create some problematic results based upon the tweets used to generate the model.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
## About
*Built by [Eliot McKinley](https://twitter.com/etmckinley) based upon [HuggingTweets](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) by Boris Dayama*
|
mcditoos/q-Taxi-v3
|
mcditoos
| 2022-06-04T17:14:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T17:14:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the course's
# custom Q-Learning notebook (see the `custom-implementation` tag), not a published library.
model = load_from_hub(repo_id="mcditoos/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
mcditoos/q-FrozenLake-v1-4x4-noSlippery
|
mcditoos
| 2022-06-04T17:09:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T17:09:40Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the course's
# custom Q-Learning notebook (see the `custom-implementation` tag), not a published library.
model = load_from_hub(repo_id="mcditoos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/orc_nft
|
huggingtweets
| 2022-06-04T16:13:13Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-04T16:12:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/orc_nft/1654359188989/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510438749154549764/sar63AXD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ORC.A ⍬</div>
<div style="text-align: center; font-size: 14px;">@orc_nft</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ORC.A ⍬.
| Data | ORC.A ⍬ |
| --- | --- |
| Tweets downloaded | 1675 |
| Retweets | 113 |
| Short tweets | 544 |
| Tweets kept | 1018 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wwc37qkh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @orc_nft's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/debtzj0e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/debtzj0e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/orc_nft')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tclong/wav2vec2-base-vios
|
tclong
| 2022-06-04T16:09:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:vivos_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-30T12:00:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vivos_dataset
model-index:
- name: wav2vec2-base-vios
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3729
- Wer: 0.2427
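A minimal transcription sketch (assuming the standard `transformers` ASR pipeline; the audio path is a placeholder for a Vietnamese clip, since the model is fine-tuned on VIVOS):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tclong/wav2vec2-base-vios")
print(asr("path/to/vietnamese_clip.wav"))  # placeholder path
```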
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4755 | 1.37 | 500 | 0.7991 | 0.5957 |
| 0.5424 | 2.75 | 1000 | 0.4290 | 0.3653 |
| 0.3586 | 4.12 | 1500 | 0.3809 | 0.2890 |
| 0.2824 | 5.49 | 2000 | 0.3808 | 0.2749 |
| 0.2249 | 6.87 | 2500 | 0.3467 | 0.2389 |
| 0.1745 | 8.24 | 3000 | 0.3688 | 0.2384 |
| 0.1459 | 9.61 | 3500 | 0.3729 | 0.2427 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
yanekyuk/camembert-keyword-extractor
|
yanekyuk
| 2022-06-04T10:28:45Z | 108,995 | 13 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"generated_from_trainer",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-04T02:03:03Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- fr
widget:
- text: "Le président de la République appelle en outre les Français à faire le choix d'une \"majorité stable et sérieuse pour les protéger face aux crises et pour agir pour l'avenir\". \"Je vois dans le projet de Jean-Luc Mélenchon ou de Madame Le Pen un projet de désordre et de soumission. Ils expliquent qu'il faut sortir de nos alliances, de l'Europe, et bâtir des alliances stratégiques avec la Russie. C'est la soumission à la Russie\", assure-t-il."
- text: "Top départ à l’ouverture des bureaux de vote. La Polynésie et les Français résidant à l'étranger, dont certains ont déjà pu voter en ligne, sont invités aux urnes ce week-end pour le premier tour des législatives, samedi 4 juin pour le continent américain et les Caraïbes, et dimanche 5 juin pour le reste du monde. En France métropolitaine, les premier et second tours auront lieu les 12 et 19 juin."
- text: "Le ministère a aussi indiqué que des missiles russes ont frappé un centre d'entraînement d'artillerie dans la région de Soumy où travaillaient des instructeurs étrangers. Il a jouté qu'une autre frappe avait détruit une position de \"mercenaires étrangers\" dans la région d'Odessa."
- text: "Le malaise est profond et ressemble à une crise existentielle. Fait rarissime au Quai d’Orsay, six syndicats et un collectif de 500 jeunes diplomates du ministère des Affaires étrangères ont appelé à la grève, jeudi 2 juin, pour protester contre la réforme de la haute fonction publique qui, à terme, entraînera la disparition des deux corps historiques de la diplomatie française : celui de ministre plénipotentiaire (ambassadeur) et celui de conseiller des affaires étrangères."
- text: "Ils se font passer pour des recruteurs de Lockheed Martin ou du géant britannique de la défense et de l’aérospatial BAE Systems. Ces soi-disant chasseurs de tête font miroiter des perspectives lucratives de carrière et des postes à responsabilité. Mais ce n’est que du vent. En réalité, il s’agit de cyberespions nord-coréens cherchant à voler des secrets industriels de groupes de défense ou du secteur de l’aérospatial, révèle Eset, une société slovaque de sécurité informatique, dans un rapport publié mardi 31 mai."
model-index:
- name: camembert-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-keyword-extractor
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- Precision: 0.6743
- Recall: 0.6979
- Accuracy: 0.9346
- F1: 0.6859
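A minimal usage sketch (assuming the standard `transformers` token-classification pipeline; the aggregation strategy is a reasonable default, not something specified by this card):
```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="yanekyuk/camembert-keyword-extractor",
    aggregation_strategy="simple",  # merge sub-word pieces into whole keywords
)
print(extractor("Le président de la République appelle les Français aux urnes."))
```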
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1747 | 1.0 | 1875 | 0.1780 | 0.5935 | 0.7116 | 0.9258 | 0.6472 |
| 0.1375 | 2.0 | 3750 | 0.1588 | 0.6505 | 0.7032 | 0.9334 | 0.6759 |
| 0.1147 | 3.0 | 5625 | 0.1727 | 0.6825 | 0.6689 | 0.9355 | 0.6756 |
| 0.0969 | 4.0 | 7500 | 0.1759 | 0.6886 | 0.6621 | 0.9350 | 0.6751 |
| 0.0837 | 5.0 | 9375 | 0.1967 | 0.6688 | 0.7112 | 0.9348 | 0.6893 |
| 0.0746 | 6.0 | 11250 | 0.2088 | 0.6646 | 0.7114 | 0.9334 | 0.6872 |
| 0.0666 | 7.0 | 13125 | 0.2169 | 0.6713 | 0.7054 | 0.9347 | 0.6879 |
| 0.0634 | 8.0 | 15000 | 0.2199 | 0.6743 | 0.6979 | 0.9346 | 0.6859 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
awalmeida/q-Taxi-v3
|
awalmeida
| 2022-06-04T08:45:06Z | 0 | 0 | null |
[
"Taxi-v3-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T07:04:21Z |
---
tags:
- Taxi-v3-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3-4x4-no_slippery
type: Taxi-v3-4x4-no_slippery
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the course's
# custom Q-Learning notebook (see the `custom-implementation` tag), not a published library.
model = load_from_hub(repo_id="awalmeida/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
awalmeida/q-FrozenLake-v1-4x4-noSlippery
|
awalmeida
| 2022-06-04T06:28:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-04T06:23:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the course's
# custom Q-Learning notebook (see the `custom-implementation` tag), not a published library.
model = load_from_hub(repo_id="awalmeida/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
madatnlp/rob-large-krmath2
|
madatnlp
| 2022-06-04T06:08:04Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-04T03:47:50Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/rob-large-krmath2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/rob-large-krmath2
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0707
- Validation Loss: 0.2571
- Epoch: 17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'SGD', 'learning_rate': 0.01, 'decay': 0.0, 'momentum': 0.9, 'nesterov': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.7479 | 1.5215 | 0 |
| 1.1286 | 0.7513 | 1 |
| 1.2498 | 0.9242 | 2 |
| 1.0213 | 0.7140 | 3 |
| 0.8002 | 0.6080 | 4 |
| 0.5895 | 0.3721 | 5 |
| 0.4699 | 0.3392 | 6 |
| 0.3064 | 0.2599 | 7 |
| 0.2803 | 0.2360 | 8 |
| 0.2162 | 0.3075 | 9 |
| 0.1878 | 0.2652 | 10 |
| 0.1635 | 0.1618 | 11 |
| 0.1342 | 0.1061 | 12 |
| 0.1058 | 0.2906 | 13 |
| 0.0869 | 0.3535 | 14 |
| 0.0704 | 0.2090 | 15 |
| 0.0608 | 0.1777 | 16 |
| 0.0707 | 0.2571 | 17 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Zengwei/icefall-asr-librispeech-pruned-transducer-stateless4-2022-06-03
|
Zengwei
| 2022-06-04T05:36:57Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-06-03T14:36:27Z |
See <https://github.com/k2-fsa/icefall/pull/344>
Note: In the uploaded files, the epoch number counts from 0.
|
ssantanag/pasajes_de_la_biblia
|
ssantanag
| 2022-06-04T04:32:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-04T03:56:29Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: pasajes_de_la_biblia
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pasajes_de_la_biblia
This model was trained on a Kaggle dataset of Bible verses; the dataset is available at https://www.kaggle.com/datasets/camesruiz/biblia-ntv-spanish-bible-ntv.
## Training and evaluation data
The data split was as follows:
- Training set: 58.20%
- Validation set: 9.65%
- Test set: 32.15%
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nbroad/splinter-base-squad2
|
nbroad
| 2022-06-04T03:47:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-04T01:30:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: splinter-base-squad2_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-base-squad2_3
This model is a fine-tuned version of [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.2.2
- Tokenizers 0.12.1
|
baru98/bert-base-cased-finetuned-squad
|
baru98
| 2022-06-04T02:53:28Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-04T01:42:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 5.7012 |
| No log | 2.0 | 14 | 5.5021 |
| No log | 3.0 | 21 | 5.4212 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
yanekyuk/berturk-keyword-extractor
|
yanekyuk
| 2022-06-04T01:57:03Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-04T01:02:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- tr
widget:
- text: "İngiltere'de düzenlenen Avrupa Tekvando ve Para Tekvando Şampiyonası’nda millî tekvandocular 5 altın, 2 gümüş ve 4 bronz olmak üzere 11, millî para tekvandocular ise 4 altın, 3 gümüş ve 1 bronz olmak üzere 8 madalya kazanarak takım halinde Avrupa şampiyonu oldu."
- text: "Füme somon dedik ama aslında lox salamuralanmış somon anlamına geliyor, füme etme opsiyonel. Lox bagel, 1930'larda Eggs Benedict furyasında New Yorklu Yahudi cemaati tarafından koşer bir alternatif olarak çıkan bir lezzet. Günümüzde benim hangover yüreğim dâhil dünyanın birçok yerinde enfes bir kahvaltı sandviçi."
- text: "Türkiye'de son aylarda sıklıkla tartışılan konut satışı karşılığında yabancılara vatandaşlık verilmesi konusunu beyin göçü kapsamında ele almak mümkün. Daha önce 250 bin dolar olan vatandaşlık bedeli yükselen tepkiler üzerine 400 bin dolara çıkarılmıştı. Türkiye'den göç eden iyi eğitimli kişilerin , gittikleri ülkelerde 250 bin dolar tutarında yabancı yatırıma denk olduğu göz önüne alındığında nitelikli insan gücünün yabancılara konut karşılığında satılan vatandaşlık bedelin eş olduğunu görüyoruz. Yurt dışına giden her bir vatandaşın yüksek teknolojili katma değer üreten sektörlere yapacağı katkılar göz önünde bulundurulduğunda bu açığın inşaat sektörüyle kapatıldığını da görüyoruz. Beyin göçü konusunda sadece ekonomik perspektiften bakıldığında bile kısa vadeli döviz kaynağı yaratmak için kullanılan vatandaşlık satışı yerine beyin göçünü önleyecek önlemler alınmasının ülkemize çok daha faydalı olacağı sonucunu çıkarıyoruz."
- text: "Türkiye’de resmî verilere göre, 15 ve daha yukarı yaştaki kişilerde mevsim etkisinden arındırılmış işsiz sayısı, bu yılın ilk çeyreğinde bir önceki çeyreğe göre 50 bin kişi artarak 3 milyon 845 bin kişi oldu. Mevsim etkisinden arındırılmış işsizlik oranı ise 0,1 puanlık artışla %11,4 seviyesinde gerçekleşti. İşsizlik oranı, ilk çeyrekte geçen yılın aynı çeyreğine göre 1,7 puan azaldı."
- text: "Boeing’in insansız uzay aracı Starliner, birtakım sorunlara rağmen Uluslararası Uzay İstasyonuna (ISS) ulaşarak ilk kez başarılı bir şekilde kenetlendi. Aracın ISS’te beş gün kalmasını takiben sorunsuz bir şekilde New Mexico’ya inmesi halinde Boeing, sonbaharda astronotları yörüngeye göndermek için Starliner’ı kullanabilir.\n\nNeden önemli? NASA’nın personal aracı üretmeyi durdurmasından kaynaklı olarak görevli astronotlar ve kozmonotlar, ISS’te Rusya’nın ürettiği uzay araçları ile taşınıyordu. Starliner’ın kendini kanıtlaması ise bu konuda Rusya’ya olan bağımlılığın potansiyel olarak ortadan kalkabileceği anlamına geliyor."
model-index:
- name: berturk-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berturk-keyword-extractor
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4306
- Precision: 0.6770
- Recall: 0.6899
- Accuracy: 0.9169
- F1: 0.6834
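A minimal usage sketch (assuming the standard `transformers` token-classification pipeline; the aggregation strategy is a reasonable default, not something specified by this card):
```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="yanekyuk/berturk-keyword-extractor",
    aggregation_strategy="simple",  # merge sub-word pieces into whole keywords
)
print(extractor("Türkiye'de işsizlik oranı ilk çeyrekte %11,4 seviyesinde gerçekleşti."))
```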
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1845 | 1.0 | 1875 | 0.1964 | 0.6380 | 0.6743 | 0.9164 | 0.6557 |
| 0.1338 | 2.0 | 3750 | 0.2023 | 0.6407 | 0.7081 | 0.9169 | 0.6727 |
| 0.0978 | 3.0 | 5625 | 0.2315 | 0.6434 | 0.7309 | 0.9159 | 0.6844 |
| 0.0742 | 4.0 | 7500 | 0.2746 | 0.6592 | 0.7144 | 0.9158 | 0.6857 |
| 0.0541 | 5.0 | 9375 | 0.3290 | 0.6700 | 0.6880 | 0.9161 | 0.6789 |
| 0.0426 | 6.0 | 11250 | 0.3608 | 0.6789 | 0.6860 | 0.9171 | 0.6824 |
| 0.0332 | 7.0 | 13125 | 0.4075 | 0.6769 | 0.6924 | 0.9168 | 0.6845 |
| 0.027 | 8.0 | 15000 | 0.4306 | 0.6770 | 0.6899 | 0.9169 | 0.6834 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/katieoneuro
|
huggingtweets
| 2022-06-04T01:31:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-04T01:26:38Z |
---
language: en
thumbnail: http://www.huggingtweets.com/katieoneuro/1654306303616/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1000482851853340672/LhUdoFyk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Katie O'Nell 🧠💻</div>
<div style="text-align: center; font-size: 14px;">@katieoneuro</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Katie O'Nell 🧠💻.
| Data | Katie O'Nell 🧠💻 |
| --- | --- |
| Tweets downloaded | 552 |
| Retweets | 323 |
| Short tweets | 17 |
| Tweets kept | 212 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2umesznv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @katieoneuro's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2evfy6ho) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2evfy6ho/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/katieoneuro')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jgriffi/xlm-roberta-base-finetuned-panx-all
|
jgriffi
| 2022-06-04T01:24:48Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-04T00:52:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1448
- F1: 0.8881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3029 | 1.0 | 1669 | 0.2075 | 0.7971 |
| 0.164 | 2.0 | 3338 | 0.1612 | 0.8680 |
| 0.1025 | 3.0 | 5007 | 0.1448 | 0.8881 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jgriffi/xlm-roberta-base-finetuned-panx-fr
|
jgriffi
| 2022-06-04T00:16:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-03T23:57:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.9320766980825479
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0994
- F1: 0.9321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5314 | 1.0 | 382 | 0.2522 | 0.8277 |
| 0.2555 | 2.0 | 764 | 0.1414 | 0.9059 |
| 0.1667 | 3.0 | 1146 | 0.0994 | 0.9321 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jgriffi/xlm-roberta-base-finetuned-panx-de
|
jgriffi
| 2022-06-03T22:39:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-03T22:14:23Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8646153846153846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2461 | 1.0 | 1049 | 0.1710 | 0.8351 |
| 0.1314 | 2.0 | 2098 | 0.1470 | 0.8439 |
| 0.0794 | 3.0 | 3147 | 0.1496 | 0.8646 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ulysses-camara/legal-bert-pt-br
|
ulysses-camara
| 2022-06-03T20:20:18Z | 9 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"pt",
"license:mit",
"region:us"
] | null | 2022-05-31T13:30:11Z |
---
language: pt
license: mit
tags:
- sentence-transformers
---
# LegalBERTPT-br
LegalBERTPT-br is a sentence-embedding model trained with SimCSE, a contrastive learning framework, on top of the Portuguese pre-trained language model [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased).
# Corpora
– From [this site](https://www2.camara.leg.br/transparencia/servicos-ao-cidadao/participacao-popular), we used the column `Conteudo` with 215,713 comments. We removed the comments from PL 3723/2019, PEC 471/2005, and Hashtag Corpus, in order to avoid bias.
– From [this site](https://www2.camara.leg.br/transparencia/servicos-ao-cidadao/participacao-popular), we also used 147,008 bills. From these projects, we used the summary field named `txtEmenta` and the project core text named `txtExplicacaoEmenta`.
– From Political Speeches, we used 462,831 texts, specifically, we used the columns: `sumario`, `textodiscurso`, and `indexacao`.
These corpora were segmented into sentences and concatenated, producing 2,307,426 sentences.
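A minimal embedding sketch (assuming the checkpoint loads directly with `sentence-transformers`, as the library tag suggests):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ulysses-camara/legal-bert-pt-br")
sentences = [
    "O projeto de lei foi aprovado pela Câmara dos Deputados.",
    "A proposta segue para o Senado.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
```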
# Citing and Authors
If you find this model helpful, feel free to cite our publication [Evaluating Topic Models in Portuguese Political Comments About Bills from Brazil’s Chamber of Deputies](https://link.springer.com/chapter/10.1007/978-3-030-91699-2_8):
```bibtex
@inproceedings{bracis,
author = {Nádia Silva and Marília Silva and Fabíola Pereira and João Tarrega and João Beinotti and Márcio Fonseca and Francisco Andrade and André Carvalho},
title = {Evaluating Topic Models in Portuguese Political Comments About Bills from Brazil’s Chamber of Deputies},
booktitle = {Anais da X Brazilian Conference on Intelligent Systems},
location = {Online},
year = {2021},
keywords = {},
issn = {0000-0000},
publisher = {SBC},
address = {Porto Alegre, RS, Brasil},
url = {https://sol.sbc.org.br/index.php/bracis/article/view/19061}
}
```
|
Eulaliefy/distilbert-base-uncased-finetuned-ner
|
Eulaliefy
| 2022-06-03T18:21:14Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-03T15:00:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9250691754288877
- name: Recall
type: recall
value: 0.9350039154267815
- name: F1
type: f1
value: 0.9300100144653389
- name: Accuracy
type: accuracy
value: 0.9836052552147044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9251
- Recall: 0.9350
- F1: 0.9300
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2356 | 1.0 | 878 | 0.0699 | 0.9110 | 0.9225 | 0.9167 | 0.9801 |
| 0.0509 | 2.0 | 1756 | 0.0621 | 0.9180 | 0.9314 | 0.9246 | 0.9823 |
| 0.0303 | 3.0 | 2634 | 0.0620 | 0.9251 | 0.9350 | 0.9300 | 0.9836 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
hananajiyya/mt5-small-summarization
|
hananajiyya
| 2022-06-03T18:09:47Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-03T00:27:50Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mt5-small-summarization
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mt5-small-summarization
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9665
- Validation Loss: 2.4241
- Train Rouge1: 23.5645
- Train Rouge2: 8.2413
- Train Rougel: 19.7515
- Train Rougelsum: 19.9204
- Train Gen Len: 19.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 4.7187 | 2.6627 | 19.5921 | 5.9723 | 16.6769 | 16.8456 | 18.955 | 0 |
| 3.1929 | 2.4941 | 21.2334 | 6.9784 | 18.2158 | 18.2062 | 18.99 | 1 |
| 2.9665 | 2.4241 | 23.5645 | 8.2413 | 19.7515 | 19.9204 | 19.0 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sinhprous/q-Taxi-v3
|
sinhprous
| 2022-06-03T17:40:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-03T14:12:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the course's
# custom Q-Learning notebook (see the `custom-implementation` tag), not a published library.
model = load_from_hub(repo_id="sinhprous/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
peter2000/wav2vec2-large-xls-r-300m-kinyarwanda
|
peter2000
| 2022-06-03T14:27:44Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-27T14:03:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kinyarwanda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kinyarwanda
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3917
- Wer: 0.3246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 9.0634 | 0.12 | 400 | 3.0554 | 1.0 |
| 2.8009 | 0.24 | 800 | 1.5927 | 0.9554 |
| 0.9022 | 0.36 | 1200 | 0.7328 | 0.6445 |
| 0.6213 | 0.48 | 1600 | 0.6138 | 0.5510 |
| 0.5299 | 0.6 | 2000 | 0.6072 | 0.5223 |
| 0.4999 | 0.72 | 2400 | 0.5449 | 0.4969 |
| 0.4731 | 0.84 | 2800 | 0.5261 | 0.4828 |
| 0.458 | 0.96 | 3200 | 0.5058 | 0.4607 |
| 0.4158 | 1.09 | 3600 | 0.4892 | 0.4463 |
| 0.4037 | 1.21 | 4000 | 0.4759 | 0.4429 |
| 0.4021 | 1.33 | 4400 | 0.4615 | 0.4330 |
| 0.3934 | 1.45 | 4800 | 0.4593 | 0.4315 |
| 0.3808 | 1.57 | 5200 | 0.4736 | 0.4344 |
| 0.3838 | 1.69 | 5600 | 0.4569 | 0.4249 |
| 0.3726 | 1.81 | 6000 | 0.4473 | 0.4140 |
| 0.3623 | 1.93 | 6400 | 0.4403 | 0.4097 |
| 0.3517 | 2.05 | 6800 | 0.4389 | 0.4061 |
| 0.333 | 2.17 | 7200 | 0.4383 | 0.4104 |
| 0.3354 | 2.29 | 7600 | 0.4360 | 0.3955 |
| 0.3257 | 2.41 | 8000 | 0.4226 | 0.3942 |
| 0.3275 | 2.53 | 8400 | 0.4206 | 0.4040 |
| 0.3262 | 2.65 | 8800 | 0.4172 | 0.3875 |
| 0.3206 | 2.77 | 9200 | 0.4209 | 0.3877 |
| 0.323 | 2.89 | 9600 | 0.4177 | 0.3825 |
| 0.3099 | 3.01 | 10000 | 0.4101 | 0.3691 |
| 0.3008 | 3.14 | 10400 | 0.4055 | 0.3709 |
| 0.2918 | 3.26 | 10800 | 0.4085 | 0.3800 |
| 0.292 | 3.38 | 11200 | 0.4089 | 0.3713 |
| 0.292 | 3.5 | 11600 | 0.4092 | 0.3730 |
| 0.2785 | 3.62 | 12000 | 0.4151 | 0.3687 |
| 0.2941 | 3.74 | 12400 | 0.4004 | 0.3639 |
| 0.2838 | 3.86 | 12800 | 0.4108 | 0.3703 |
| 0.2854 | 3.98 | 13200 | 0.3911 | 0.3596 |
| 0.2683 | 4.1 | 13600 | 0.3944 | 0.3575 |
| 0.2647 | 4.22 | 14000 | 0.3836 | 0.3538 |
| 0.2704 | 4.34 | 14400 | 0.4006 | 0.3540 |
| 0.2664 | 4.46 | 14800 | 0.3974 | 0.3553 |
| 0.2662 | 4.58 | 15200 | 0.3890 | 0.3470 |
| 0.2615 | 4.7 | 15600 | 0.3856 | 0.3507 |
| 0.2553 | 4.82 | 16000 | 0.3814 | 0.3497 |
| 0.2587 | 4.94 | 16400 | 0.3837 | 0.3440 |
| 0.2522 | 5.06 | 16800 | 0.3834 | 0.3486 |
| 0.2451 | 5.19 | 17200 | 0.3897 | 0.3414 |
| 0.2423 | 5.31 | 17600 | 0.3864 | 0.3481 |
| 0.2434 | 5.43 | 18000 | 0.3808 | 0.3416 |
| 0.2525 | 5.55 | 18400 | 0.3795 | 0.3408 |
| 0.2427 | 5.67 | 18800 | 0.3841 | 0.3411 |
| 0.2411 | 5.79 | 19200 | 0.3804 | 0.3366 |
| 0.2404 | 5.91 | 19600 | 0.3800 | 0.3328 |
| 0.2372 | 6.03 | 20000 | 0.3749 | 0.3335 |
| 0.2244 | 6.15 | 20400 | 0.3820 | 0.3327 |
| 0.2381 | 6.27 | 20800 | 0.3789 | 0.3325 |
| 0.2294 | 6.39 | 21200 | 0.3867 | 0.3298 |
| 0.2378 | 6.51 | 21600 | 0.3843 | 0.3281 |
| 0.2312 | 6.63 | 22000 | 0.3813 | 0.3277 |
| 0.2411 | 6.75 | 22400 | 0.3780 | 0.3268 |
| 0.2315 | 6.87 | 22800 | 0.3790 | 0.3280 |
| 0.241 | 6.99 | 23200 | 0.3776 | 0.3281 |
| 0.2313 | 7.11 | 23600 | 0.3929 | 0.3283 |
| 0.2423 | 7.24 | 24000 | 0.3905 | 0.3280 |
| 0.2337 | 7.36 | 24400 | 0.3979 | 0.3249 |
| 0.2368 | 7.48 | 24800 | 0.3980 | 0.3257 |
| 0.2409 | 7.6 | 25200 | 0.3937 | 0.3229 |
| 0.2416 | 7.72 | 25600 | 0.3867 | 0.3237 |
| 0.2364 | 7.84 | 26000 | 0.3912 | 0.3253 |
| 0.234 | 7.96 | 26400 | 0.3917 | 0.3246 |
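The `Wer` column above is the word error rate on the validation set. As a hedged sketch, it can be computed with the `datasets` metric API listed under framework versions (the transcripts below are hypothetical examples, not outputs of this model):
```python
from datasets import load_metric

wer_metric = load_metric("wer")
wer = wer_metric.compute(
    predictions=["a hypothetical model transcript"],
    references=["a hypothetical reference transcript"],
)
print(f"WER: {wer:.4f}")
```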
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sinhprous/q-FrozenLake-v1-4x4-noSlippery | sinhprous | 2022-06-03T14:02:29Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-06-03T14:02:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Hugging Face
# Deep RL course notebook and are assumed to be defined in the current session.
model = load_from_hub(repo_id="sinhprous/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
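Once the Q-table is loaded, acting greedily is just an argmax over the current state's row. A minimal sketch (assuming the table is indexed as `qtable[state, action]`, which this card does not guarantee):
```python
import numpy as np

def greedy_policy(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

# Hypothetical rollout step:
# state = env.reset()
# action = greedy_policy(model["qtable"], state)
```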
|