modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: list | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---
ShannonAI/ChineseBERT-large
|
ShannonAI
| 2022-06-19T12:07:31Z | 23 | 5 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2106.16038",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# ChineseBERT-large
This repository contains the code, model, and dataset for **ChineseBERT**, published at ACL 2021.
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings:
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
The char, glyph, and pinyin embeddings are then concatenated and mapped to a
D-dimensional fusion embedding through a fully connected layer.
Finally, the fusion embedding is added to the position embedding, and the result is fed as input to the BERT model.
The following image shows an overview of the ChineseBERT architecture.

ChineseBERT leverages the glyph and pinyin information of Chinese
characters to enhance the model's ability to capture
contextual semantics from surface character forms and
to disambiguate polyphonic characters in Chinese.
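To make the fusion step concrete, here is a minimal PyTorch sketch (illustrative sizes; the real glyph and pinyin encoders in the linked repository are CNN-based rather than the plain lookup tables used here):
```python
import torch
import torch.nn as nn

class FusionEmbedding(nn.Module):
    """Toy sketch of the fusion step: concatenate char, glyph, and pinyin
    embeddings, then project back to the model dimension D with a fully
    connected layer. The real glyph/pinyin encoders are simplified to
    lookup tables for illustration."""

    def __init__(self, vocab_size: int = 21128, d_model: int = 768):  # illustrative sizes
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, d_model)
        self.glyph_emb = nn.Embedding(vocab_size, d_model)
        self.pinyin_emb = nn.Embedding(vocab_size, d_model)
        self.fusion = nn.Linear(3 * d_model, d_model)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        concat = torch.cat(
            [self.char_emb(input_ids), self.glyph_emb(input_ids), self.pinyin_emb(input_ids)],
            dim=-1,
        )
        # Position embeddings are added after this projection, as in BERT
        return self.fusion(concat)

print(FusionEmbedding()(torch.tensor([[1, 2, 3]])).shape)  # torch.Size([1, 3, 768])
```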
|
dibsondivya/distilbert-phmtweets-sutd
|
dibsondivya
| 2022-06-19T11:40:42Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"health",
"tweet",
"dataset:custom-phm-tweets",
"arxiv:1802.09130",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-19T10:09:47Z |
---
tags:
- distilbert
- health
- tweet
datasets:
- custom-phm-tweets
metrics:
- accuracy
model-index:
- name: distilbert-phmtweets-sutd
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: custom-phm-tweets
type: labelled
metrics:
- name: Accuracy
type: accuracy
value: 0.877
---
# distilbert-phmtweets-sutd
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) for text classification, identifying public health events in tweets. The project was based on an Emory University study, [Detection of Personal Health Mentions in Social Media](https://arxiv.org/pdf/1802.09130v2.pdf), which used this [custom dataset](https://github.com/emory-irlab/PHM2017).
It achieves the following results on the evaluation set:
- Accuracy: 0.877
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
```
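For a quick end-to-end check, the same checkpoint can also be driven through the `pipeline` API (a minimal sketch; the example tweet is illustrative and the label names follow the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dibsondivya/distilbert-phmtweets-sutd",
)
print(classifier("Been coughing all week, finally seeing a doctor tomorrow"))  # illustrative tweet
```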
### Model Evaluation Results
With the validation set:
- Accuracy: 0.8708661417322835
With the test set:
- Accuracy: 0.8772961058045555
## Reference for distilbert-base-uncased Model
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
|
dibsondivya/ernie-phmtweets-sutd
|
dibsondivya
| 2022-06-19T11:38:29Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"ernie",
"health",
"tweet",
"dataset:custom-phm-tweets",
"arxiv:1802.09130",
"arxiv:1907.12412",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-19T11:20:14Z |
---
tags:
- ernie
- health
- tweet
datasets:
- custom-phm-tweets
metrics:
- accuracy
model-index:
- name: ernie-phmtweets-sutd
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: custom-phm-tweets
type: labelled
metrics:
- name: Accuracy
type: accuracy
value: 0.885
---
# ernie-phmtweets-sutd
This model is a fine-tuned version of [ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) for text classification, identifying public health events in tweets. The project was based on an Emory University study, [Detection of Personal Health Mentions in Social Media](https://arxiv.org/pdf/1802.09130v2.pdf), which used this [custom dataset](https://github.com/emory-irlab/PHM2017).
It achieves the following results on the evaluation set:
- Accuracy: 0.885
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
```
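Beyond loading, a minimal end-to-end inference sketch (the example tweet is illustrative; label names come from the model's `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/ernie-phmtweets-sutd")

# Illustrative input tweet
inputs = tokenizer("Just got diagnosed with the flu, staying home today", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```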
### Model Evaluation Results
With the validation set:
- Accuracy: 0.889763779527559
With the test set:
- Accuracy: 0.884643644379133
## References for ERNIE 2.0 Model
```bibtex
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
```
|
huggingtweets/rsapublic
|
huggingtweets
| 2022-06-19T11:26:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-19T11:14:09Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rsapublic/1655637814216/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1536637048391491584/zfHd6Mha_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">bopo mofo</div>
<div style="text-align: center; font-size: 14px;">@rsapublic</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from bopo mofo.
| Data | bopo mofo |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 1562 |
| Short tweets | 303 |
| Tweets kept | 1347 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qnsx0b8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rsapublic's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/368jvjwu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/368jvjwu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/rsapublic')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bookpanda/wangchanberta-base-att-spm-uncased-masking
|
bookpanda
| 2022-06-19T11:05:59Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-19T10:59:54Z |
---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-masking
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
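## Usage
The base model is a Thai masked language model, so the fine-tuned checkpoint can be exercised with a `fill-mask` pipeline (a minimal sketch; the Thai example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="bookpanda/wangchanberta-base-att-spm-uncased-masking",
)
# Use the tokenizer's own mask token rather than hard-coding it
print(fill(f"ฉันชอบไป{fill.tokenizer.mask_token}กับเพื่อน"))
```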
|
zakria/NLP_Project
|
zakria
| 2022-06-19T09:55:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-19T07:49:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NLP_Project
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_Project
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5308
- Wer: 0.3428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5939 | 1.0 | 500 | 2.1356 | 1.0014 |
| 0.9126 | 2.01 | 1000 | 0.5469 | 0.5354 |
| 0.4491 | 3.01 | 1500 | 0.4636 | 0.4503 |
| 0.3008 | 4.02 | 2000 | 0.4269 | 0.4330 |
| 0.2229 | 5.02 | 2500 | 0.4164 | 0.4073 |
| 0.188 | 6.02 | 3000 | 0.4717 | 0.4107 |
| 0.1739 | 7.03 | 3500 | 0.4306 | 0.4031 |
| 0.159 | 8.03 | 4000 | 0.4394 | 0.3993 |
| 0.1342 | 9.04 | 4500 | 0.4462 | 0.3904 |
| 0.1093 | 10.04 | 5000 | 0.4387 | 0.3759 |
| 0.1005 | 11.04 | 5500 | 0.5033 | 0.3847 |
| 0.0857 | 12.05 | 6000 | 0.4805 | 0.3876 |
| 0.0779 | 13.05 | 6500 | 0.5269 | 0.3810 |
| 0.072 | 14.06 | 7000 | 0.5109 | 0.3710 |
| 0.0641 | 15.06 | 7500 | 0.4865 | 0.3638 |
| 0.0584 | 16.06 | 8000 | 0.5041 | 0.3646 |
| 0.0552 | 17.07 | 8500 | 0.4987 | 0.3537 |
| 0.0535 | 18.07 | 9000 | 0.4947 | 0.3586 |
| 0.0475 | 19.08 | 9500 | 0.5237 | 0.3647 |
| 0.042 | 20.08 | 10000 | 0.5338 | 0.3561 |
| 0.0416 | 21.08 | 10500 | 0.5068 | 0.3483 |
| 0.0358 | 22.09 | 11000 | 0.5126 | 0.3532 |
| 0.0334 | 23.09 | 11500 | 0.5213 | 0.3536 |
| 0.0331 | 24.1 | 12000 | 0.5378 | 0.3496 |
| 0.03 | 25.1 | 12500 | 0.5167 | 0.3470 |
| 0.0254 | 26.1 | 13000 | 0.5245 | 0.3418 |
| 0.0233 | 27.11 | 13500 | 0.5393 | 0.3456 |
| 0.0232 | 28.11 | 14000 | 0.5279 | 0.3425 |
| 0.022 | 29.12 | 14500 | 0.5308 | 0.3428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
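## Usage
For inference, the checkpoint can be wrapped in an `automatic-speech-recognition` pipeline (a sketch; the audio path is a placeholder, and wav2vec2-base expects 16 kHz mono input):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="zakria/NLP_Project")
# "sample.wav" is a placeholder; resample to 16 kHz mono before transcribing
print(asr("sample.wav")["text"])
```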
|
devetle/dqn-SpaceInvadersNoFrameskip-v4
|
devetle
| 2022-06-19T09:29:00Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-19T03:34:30Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 622.00 +/- 131.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga devetle -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga devetle
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1800000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
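Outside the RL Zoo scripts, the checkpoint can also be fetched and loaded directly with `huggingface_sb3` and Stable-Baselines3 (a sketch; the zip filename is an assumption following the usual RL Zoo `<algo>-<env>.zip` convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is an assumption based on the RL Zoo naming convention
checkpoint = load_from_hub(
    repo_id="devetle/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```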
|
sun1638650145/q-Taxi-v3
|
sun1638650145
| 2022-06-19T09:00:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-19T09:00:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub and evaluate_agent are helper functions from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id='sun1638650145/q-Taxi-v3', filename='q-learning.pkl')
# Don't forget to check whether extra parameters are needed (e.g. is_slippery=False)
env = gym.make(model['env_id'])
evaluate_agent(env, model['max_steps'], model['n_eval_episodes'], model['qtable'], model['eval_seed'])
```
|
janeel/tinyroberta-squad2-finetuned-squad
|
janeel
| 2022-06-19T08:51:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-16T12:51:25Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: tinyroberta-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyroberta-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6185 | 1.0 | 8239 | 0.9460 |
| 0.4243 | 2.0 | 16478 | 1.1592 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
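## Usage
A minimal question-answering sketch with the fine-tuned checkpoint (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="janeel/tinyroberta-squad2-finetuned-squad")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="tinyroberta-squad2 was fine-tuned on the squad_v2 dataset for two epochs.",
)
print(result["answer"], result["score"])
```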
|
ShannonAI/ChineseBERT-base
|
ShannonAI
| 2022-06-19T08:14:46Z | 109 | 20 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2106.16038",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# ChineseBERT-base
This repository contains the code, model, and dataset for **ChineseBERT**, published at ACL 2021.
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings:
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
The char, glyph, and pinyin embeddings are then concatenated and mapped to a
D-dimensional fusion embedding through a fully connected layer.
Finally, the fusion embedding is added to the position embedding, and the result is fed as input to the BERT model.
The following image shows an overview of the ChineseBERT architecture.

ChineseBERT leverages the glyph and pinyin information of Chinese
characters to enhance the model's ability to capture
contextual semantics from surface character forms and
to disambiguate polyphonic characters in Chinese.
|
nestoralvaro/mt5-small-test-ged-mlsum_max_target_length_10
|
nestoralvaro
| 2022-06-19T06:39:24Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-18T15:09:43Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mt5-small-test-ged-mlsum_max_target_length_10
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
args: es
metrics:
- name: Rouge1
type: rouge
value: 74.8229
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-test-ged-mlsum_max_target_length_10
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3341
- Rouge1: 74.8229
- Rouge2: 68.1808
- Rougel: 74.8297
- Rougelsum: 74.8414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.5565 | 1.0 | 33296 | 0.3827 | 69.9041 | 62.821 | 69.8709 | 69.8924 |
| 0.2636 | 2.0 | 66592 | 0.3552 | 72.0701 | 65.4937 | 72.0787 | 72.091 |
| 0.2309 | 3.0 | 99888 | 0.3525 | 72.5071 | 65.8026 | 72.5132 | 72.512 |
| 0.2109 | 4.0 | 133184 | 0.3346 | 74.0842 | 67.4776 | 74.0887 | 74.0968 |
| 0.1972 | 5.0 | 166480 | 0.3398 | 74.6051 | 68.6024 | 74.6177 | 74.6365 |
| 0.1867 | 6.0 | 199776 | 0.3283 | 74.9022 | 68.2146 | 74.9023 | 74.926 |
| 0.1785 | 7.0 | 233072 | 0.3325 | 74.8631 | 68.2468 | 74.8843 | 74.9026 |
| 0.1725 | 8.0 | 266368 | 0.3341 | 74.8229 | 68.1808 | 74.8297 | 74.8414 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
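## Usage
Since the model is tagged `summarization` and was trained on the Spanish (`es`) configuration of mlsum with very short targets, a minimal usage sketch might look like this (the input text is illustrative, and `max_length=10` mirrors the target-length cap in the model name):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="nestoralvaro/mt5-small-test-ged-mlsum_max_target_length_10",
)
article = "El ayuntamiento presentó hoy el nuevo plan de movilidad urbana de la ciudad."  # illustrative
print(summarizer(article, max_length=10))
```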
|
botika/checkpoint-124500-finetuned-squad
|
botika
| 2022-06-19T05:53:11Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-17T07:41:58Z |
---
tags:
- generated_from_trainer
model-index:
- name: checkpoint-124500-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint-124500-finetuned-squad
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 14.9594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.9975 | 1.0 | 3289 | 3.8405 |
| 3.7311 | 2.0 | 6578 | 3.7114 |
| 3.5681 | 3.0 | 9867 | 3.6829 |
| 3.4101 | 4.0 | 13156 | 3.6368 |
| 3.2487 | 5.0 | 16445 | 3.6526 |
| 3.1143 | 6.0 | 19734 | 3.7567 |
| 2.9783 | 7.0 | 23023 | 3.8469 |
| 2.8295 | 8.0 | 26312 | 4.0040 |
| 2.6912 | 9.0 | 29601 | 4.1996 |
| 2.5424 | 10.0 | 32890 | 4.3387 |
| 2.4161 | 11.0 | 36179 | 4.4988 |
| 2.2713 | 12.0 | 39468 | 4.7861 |
| 2.1413 | 13.0 | 42757 | 4.9276 |
| 2.0125 | 14.0 | 46046 | 5.0598 |
| 1.8798 | 15.0 | 49335 | 5.3347 |
| 1.726 | 16.0 | 52624 | 5.5869 |
| 1.5994 | 17.0 | 55913 | 5.7161 |
| 1.4643 | 18.0 | 59202 | 6.0174 |
| 1.3237 | 19.0 | 62491 | 6.4926 |
| 1.2155 | 20.0 | 65780 | 6.4882 |
| 1.1029 | 21.0 | 69069 | 6.9922 |
| 0.9948 | 22.0 | 72358 | 7.1357 |
| 0.9038 | 23.0 | 75647 | 7.3676 |
| 0.8099 | 24.0 | 78936 | 7.4180 |
| 0.7254 | 25.0 | 82225 | 7.7753 |
| 0.6598 | 26.0 | 85514 | 7.8643 |
| 0.5723 | 27.0 | 88803 | 8.1798 |
| 0.5337 | 28.0 | 92092 | 8.3053 |
| 0.4643 | 29.0 | 95381 | 8.8597 |
| 0.4241 | 30.0 | 98670 | 8.9849 |
| 0.3763 | 31.0 | 101959 | 8.8406 |
| 0.3479 | 32.0 | 105248 | 9.1517 |
| 0.3271 | 33.0 | 108537 | 9.3659 |
| 0.2911 | 34.0 | 111826 | 9.4813 |
| 0.2836 | 35.0 | 115115 | 9.5746 |
| 0.2528 | 36.0 | 118404 | 9.7027 |
| 0.2345 | 37.0 | 121693 | 9.7515 |
| 0.2184 | 38.0 | 124982 | 9.9729 |
| 0.2067 | 39.0 | 128271 | 10.0828 |
| 0.2077 | 40.0 | 131560 | 10.0878 |
| 0.1876 | 41.0 | 134849 | 10.2974 |
| 0.1719 | 42.0 | 138138 | 10.2712 |
| 0.1637 | 43.0 | 141427 | 10.5788 |
| 0.1482 | 44.0 | 144716 | 10.7465 |
| 0.1509 | 45.0 | 148005 | 10.4603 |
| 0.1358 | 46.0 | 151294 | 10.7665 |
| 0.1316 | 47.0 | 154583 | 10.7724 |
| 0.1223 | 48.0 | 157872 | 11.1766 |
| 0.1205 | 49.0 | 161161 | 11.1870 |
| 0.1203 | 50.0 | 164450 | 11.1053 |
| 0.1081 | 51.0 | 167739 | 10.9696 |
| 0.103 | 52.0 | 171028 | 11.2010 |
| 0.0938 | 53.0 | 174317 | 11.6728 |
| 0.0924 | 54.0 | 177606 | 11.1423 |
| 0.0922 | 55.0 | 180895 | 11.7409 |
| 0.0827 | 56.0 | 184184 | 11.7850 |
| 0.0829 | 57.0 | 187473 | 11.8956 |
| 0.073 | 58.0 | 190762 | 11.8915 |
| 0.0788 | 59.0 | 194051 | 12.1617 |
| 0.0734 | 60.0 | 197340 | 12.2007 |
| 0.0729 | 61.0 | 200629 | 12.2388 |
| 0.0663 | 62.0 | 203918 | 12.2471 |
| 0.0662 | 63.0 | 207207 | 12.5830 |
| 0.064 | 64.0 | 210496 | 12.6105 |
| 0.0599 | 65.0 | 213785 | 12.3712 |
| 0.0604 | 66.0 | 217074 | 12.9249 |
| 0.0574 | 67.0 | 220363 | 12.7309 |
| 0.0538 | 68.0 | 223652 | 12.8068 |
| 0.0526 | 69.0 | 226941 | 13.4368 |
| 0.0471 | 70.0 | 230230 | 13.5148 |
| 0.0436 | 71.0 | 233519 | 13.3391 |
| 0.0448 | 72.0 | 236808 | 13.4100 |
| 0.0428 | 73.0 | 240097 | 13.5617 |
| 0.0401 | 74.0 | 243386 | 13.8674 |
| 0.035 | 75.0 | 246675 | 13.5746 |
| 0.0342 | 76.0 | 249964 | 13.5042 |
| 0.0344 | 77.0 | 253253 | 14.2085 |
| 0.0365 | 78.0 | 256542 | 13.6393 |
| 0.0306 | 79.0 | 259831 | 13.9807 |
| 0.0311 | 80.0 | 263120 | 13.9768 |
| 0.0353 | 81.0 | 266409 | 14.5245 |
| 0.0299 | 82.0 | 269698 | 13.9471 |
| 0.0263 | 83.0 | 272987 | 13.7899 |
| 0.0254 | 84.0 | 276276 | 14.3786 |
| 0.0267 | 85.0 | 279565 | 14.5611 |
| 0.022 | 86.0 | 282854 | 14.2658 |
| 0.0198 | 87.0 | 286143 | 14.9215 |
| 0.0193 | 88.0 | 289432 | 14.5650 |
| 0.0228 | 89.0 | 292721 | 14.7014 |
| 0.0184 | 90.0 | 296010 | 14.6946 |
| 0.0182 | 91.0 | 299299 | 14.6614 |
| 0.0188 | 92.0 | 302588 | 14.6915 |
| 0.0196 | 93.0 | 305877 | 14.7262 |
| 0.0138 | 94.0 | 309166 | 14.7625 |
| 0.0201 | 95.0 | 312455 | 15.0442 |
| 0.0189 | 96.0 | 315744 | 14.8832 |
| 0.0148 | 97.0 | 319033 | 14.8995 |
| 0.0129 | 98.0 | 322322 | 14.8974 |
| 0.0132 | 99.0 | 325611 | 14.9813 |
| 0.0139 | 100.0 | 328900 | 14.9594 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Klinsc/q-Taxi-v3
|
Klinsc
| 2022-06-19T05:09:11Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-19T05:09:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Klinsc/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Klinsc/q-FrozenLake-v1-4x4-noSlippery
|
Klinsc
| 2022-06-19T04:46:57Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-19T04:43:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Klinsc/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
janeel/roberta-base-finetuned-squad
|
janeel
| 2022-06-19T04:32:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-18T14:02:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8678 | 1.0 | 8239 | 0.8014 |
| 0.6423 | 2.0 | 16478 | 0.8556 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Tstarshak/q-FrozenLake-v1-4x4-noSlippery
|
Tstarshak
| 2022-06-19T04:15:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-19T04:15:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Tstarshak/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/vox_akuma
|
huggingtweets
| 2022-06-19T03:26:08Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-28T15:10:18Z |
---
language: en
thumbnail: http://www.huggingtweets.com/vox_akuma/1655609164156/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509960920449093633/c0in4uvf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vox Akuma 👹🧧 NIJISANJI EN</div>
<div style="text-align: center; font-size: 14px;">@vox_akuma</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Vox Akuma 👹🧧 NIJISANJI EN.
| Data | Vox Akuma 👹🧧 NIJISANJI EN |
| --- | --- |
| Tweets downloaded | 3149 |
| Retweets | 948 |
| Short tweets | 465 |
| Tweets kept | 1736 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2g4om0wh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vox_akuma's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qy49fjem) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qy49fjem/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/vox_akuma')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1
|
gary109
| 2022-06-19T00:31:50Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T03:12:49Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4123
- Wer: 0.1668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2696 | 1.0 | 552 | 0.4421 | 0.2013 |
| 0.2498 | 2.0 | 1104 | 0.4389 | 0.1887 |
| 0.2387 | 3.0 | 1656 | 0.4154 | 0.1788 |
| 0.1902 | 4.0 | 2208 | 0.4143 | 0.1753 |
| 0.1896 | 5.0 | 2760 | 0.4123 | 0.1668 |
| 0.1658 | 6.0 | 3312 | 0.4366 | 0.1651 |
| 0.1312 | 7.0 | 3864 | 0.4309 | 0.1594 |
| 0.1186 | 8.0 | 4416 | 0.4432 | 0.1561 |
| 0.1476 | 9.0 | 4968 | 0.4400 | 0.1569 |
| 0.1027 | 10.0 | 5520 | 0.4389 | 0.1554 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
raedinkhaled/swin-tiny-patch4-window7-224-finetuned-mri
|
raedinkhaled
| 2022-06-19T00:13:22Z | 80 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-18T16:25:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-mri
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9806603773584905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-mri
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Accuracy: 0.9807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0592 | 1.0 | 447 | 0.0823 | 0.9695 |
| 0.0196 | 2.0 | 894 | 0.0761 | 0.9739 |
| 0.0058 | 3.0 | 1341 | 0.0608 | 0.9807 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
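## Usage
A minimal image-classification sketch (the image path is a placeholder; the predicted labels come from the imagefolder classes used in training):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="raedinkhaled/swin-tiny-patch4-window7-224-finetuned-mri",
)
print(classifier("scan.png"))  # "scan.png" is a placeholder path to an MRI image
```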
|
kjunelee/pegasus-samsum
|
kjunelee
| 2022-06-18T22:35:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-18T08:01:44Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
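## Usage
A dialogue-summarization sketch calling `generate` directly (the sample dialogue and decoding parameters are illustrative assumptions, not values from the training run):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("kjunelee/pegasus-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("kjunelee/pegasus-samsum")

dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)  # assumed decoding settings
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```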
|
Hugo123/Ttgb
|
Hugo123
| 2022-06-18T22:13:53Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-06-18T22:13:53Z |
---
license: bsd-3-clause-clear
---
|
MerlinTK/q-Taxi-v3
|
MerlinTK
| 2022-06-18T21:30:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-18T21:29:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="MerlinTK/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
BeardedJohn/bert-finetuned-seq-classification-fake-news
|
BeardedJohn
| 2022-06-18T21:03:42Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T16:58:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BeardedJohn/bert-finetuned-seq-classification-fake-news
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BeardedJohn/bert-finetuned-seq-classification-fake-news
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0719
- Validation Loss: 0.0214
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 332, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0719 | 0.0214 | 0 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zakria/Project_NLP
|
zakria
| 2022-06-18T20:44:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T18:43:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Project_NLP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Project_NLP
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5324
- Wer: 0.3355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5697 | 1.0 | 500 | 2.1035 | 0.9979 |
| 0.8932 | 2.01 | 1000 | 0.5649 | 0.5621 |
| 0.4363 | 3.01 | 1500 | 0.4326 | 0.4612 |
| 0.3035 | 4.02 | 2000 | 0.4120 | 0.4191 |
| 0.2343 | 5.02 | 2500 | 0.4199 | 0.3985 |
| 0.1921 | 6.02 | 3000 | 0.4380 | 0.4043 |
| 0.1549 | 7.03 | 3500 | 0.4456 | 0.3925 |
| 0.1385 | 8.03 | 4000 | 0.4264 | 0.3871 |
| 0.1217 | 9.04 | 4500 | 0.4744 | 0.3774 |
| 0.1041 | 10.04 | 5000 | 0.4498 | 0.3745 |
| 0.0968 | 11.04 | 5500 | 0.4716 | 0.3628 |
| 0.0893 | 12.05 | 6000 | 0.4680 | 0.3764 |
| 0.078 | 13.05 | 6500 | 0.5100 | 0.3623 |
| 0.0704 | 14.06 | 7000 | 0.4893 | 0.3552 |
| 0.0659 | 15.06 | 7500 | 0.4956 | 0.3565 |
| 0.0578 | 16.06 | 8000 | 0.5450 | 0.3595 |
| 0.0563 | 17.07 | 8500 | 0.4891 | 0.3614 |
| 0.0557 | 18.07 | 9000 | 0.5307 | 0.3548 |
| 0.0447 | 19.08 | 9500 | 0.4923 | 0.3493 |
| 0.0456 | 20.08 | 10000 | 0.5156 | 0.3479 |
| 0.0407 | 21.08 | 10500 | 0.4979 | 0.3389 |
| 0.0354 | 22.09 | 11000 | 0.5549 | 0.3462 |
| 0.0322 | 23.09 | 11500 | 0.5601 | 0.3439 |
| 0.0342 | 24.1 | 12000 | 0.5131 | 0.3451 |
| 0.0276 | 25.1 | 12500 | 0.5206 | 0.3392 |
| 0.0245 | 26.1 | 13000 | 0.5337 | 0.3373 |
| 0.0226 | 27.11 | 13500 | 0.5311 | 0.3353 |
| 0.0229 | 28.11 | 14000 | 0.5375 | 0.3373 |
| 0.0225 | 29.12 | 14500 | 0.5324 | 0.3355 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science-v3-e2-v4-e2-manual
|
theojolliffe
| 2022-06-18T18:01:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-18T17:39:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e2-v4-e2-manual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e2-v4-e2-manual
This model is a fine-tuned version of [theojolliffe/bart-cnn-science-v3-e2](https://huggingface.co/theojolliffe/bart-cnn-science-v3-e2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9189
- Rouge1: 55.982
- Rouge2: 36.9147
- Rougel: 39.1563
- Rougelsum: 53.5959
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 42 | 0.9365 | 53.4332 | 34.0477 | 36.9735 | 51.1918 | 142.0 |
| No log | 2.0 | 84 | 0.9189 | 55.982 | 36.9147 | 39.1563 | 53.5959 | 142.0 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vai6hav/wav2vec2-large-xls-r-300m-hindi-epochs15-colab
|
vai6hav
| 2022-06-18T17:42:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T16:56:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-epochs15-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-epochs15-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5705
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 20.2764 | 5.53 | 50 | 8.1197 | 1.0 |
| 5.2964 | 11.11 | 100 | 3.5705 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
theojolliffe/bart-cnn-science-v3-e1-v4-e4-manual
|
theojolliffe
| 2022-06-18T17:13:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-18T16:46:47Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e1-v4-e4-manual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e1-v4-e4-manual
This model is a fine-tuned version of [theojolliffe/bart-cnn-science-v3-e1](https://huggingface.co/theojolliffe/bart-cnn-science-v3-e1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2615
- Rouge1: 53.36
- Rouge2: 32.0237
- Rougel: 33.2835
- Rougelsum: 50.7455
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 42 | 1.0675 | 51.743 | 31.3774 | 34.1939 | 48.7234 | 142.0 |
| No log | 2.0 | 84 | 1.0669 | 49.4166 | 28.1438 | 30.188 | 46.0289 | 142.0 |
| No log | 3.0 | 126 | 1.1799 | 52.6909 | 31.0174 | 35.441 | 50.0351 | 142.0 |
| No log | 4.0 | 168 | 1.2615 | 53.36 | 32.0237 | 33.2835 | 50.7455 | 142.0 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios-v4
|
tclong
| 2022-06-18T16:59:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:vivos_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T18:29:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vivos_dataset
model-index:
- name: wav2vec2-base-vios-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-v4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3198
- Wer: 0.2169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 7.8138 | 0.69 | 500 | 3.5011 | 1.0 |
| 3.4372 | 1.37 | 1000 | 3.3447 | 1.0 |
| 1.9519 | 2.06 | 1500 | 0.8356 | 0.5944 |
| 0.8581 | 2.74 | 2000 | 0.5280 | 0.4038 |
| 0.6405 | 3.43 | 2500 | 0.4410 | 0.3410 |
| 0.5417 | 4.12 | 3000 | 0.3990 | 0.3140 |
| 0.4804 | 4.8 | 3500 | 0.3804 | 0.2973 |
| 0.4384 | 5.49 | 4000 | 0.3644 | 0.2808 |
| 0.4162 | 6.17 | 4500 | 0.3542 | 0.2648 |
| 0.3941 | 6.86 | 5000 | 0.3436 | 0.2529 |
| 0.3733 | 7.54 | 5500 | 0.3355 | 0.2520 |
| 0.3564 | 8.23 | 6000 | 0.3294 | 0.2415 |
| 0.3412 | 8.92 | 6500 | 0.3311 | 0.2332 |
| 0.3266 | 9.6 | 7000 | 0.3217 | 0.2325 |
| 0.3226 | 10.29 | 7500 | 0.3317 | 0.2303 |
| 0.3115 | 10.97 | 8000 | 0.3226 | 0.2279 |
| 0.3094 | 11.66 | 8500 | 0.3157 | 0.2236 |
| 0.2967 | 12.35 | 9000 | 0.3109 | 0.2202 |
| 0.2995 | 13.03 | 9500 | 0.3129 | 0.2156 |
| 0.2895 | 13.72 | 10000 | 0.3195 | 0.2146 |
| 0.3089 | 14.4 | 10500 | 0.3198 | 0.2169 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Ambiwlans/PPO-1m-SpaceInvadersNoFrameskip-v4
|
Ambiwlans
| 2022-06-18T15:58:34Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-18T15:57:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 273.00 +/- 82.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -orga Ambiwlans -f logs/
python enjoy.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ambiwlans
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 1000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
anibahug/mt5-small-finetuned-amazon-en-de
|
anibahug
| 2022-06-18T15:39:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-18T14:20:45Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-de
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [Amazon reviews multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2896
- Rouge1: 14.7163
- Rouge2: 6.6341
- Rougel: 14.2052
- Rougelsum: 14.2318
## Model description
The model can summarize texts in English and German.
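A minimal usage sketch (illustrative only; the review text is made up and the generation settings are assumptions, not the author's):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="anibahug/mt5-small-finetuned-amazon-en-de")

# Illustrative product-review input; max_length is an assumed, not tuned, value
review = "I bought this coffee maker a month ago and it has worked flawlessly every morning."
print(summarizer(review, max_length=30)[0]["summary_text"])
```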
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
Training was done on Google Colab (using its free GPU).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.2925 | 1.0 | 1276 | 3.5751 | 13.6254 | 6.0527 | 13.109 | 13.1438 |
| 4.0677 | 2.0 | 2552 | 3.4031 | 13.5907 | 6.068 | 13.3526 | 13.2471 |
| 3.7458 | 3.0 | 3828 | 3.3434 | 14.7229 | 6.8482 | 14.1443 | 14.2218 |
| 3.5831 | 4.0 | 5104 | 3.3353 | 14.8696 | 6.6371 | 14.1342 | 14.2907 |
| 3.4841 | 5.0 | 6380 | 3.3037 | 14.233 | 6.2318 | 13.9218 | 13.9781 |
| 3.4142 | 6.0 | 7656 | 3.2914 | 13.7344 | 5.9446 | 13.5476 | 13.6362 |
| 3.3587 | 7.0 | 8932 | 3.2959 | 14.2007 | 6.1905 | 13.5255 | 13.5237 |
| 3.3448 | 8.0 | 10208 | 3.2896 | 14.7163 | 6.6341 | 14.2052 | 14.2318 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dennis-fast/DialoGPT-ElonMusk
|
dennis-fast
| 2022-06-18T15:13:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-18T09:43:20Z |
---
tags:
- conversational
license: mit
---
# DialoGPT-ElonMusk: Chat with Elon Musk
This is a conversational language model of Elon Musk. The bot's conversation abilities come from Microsoft's [DialoGPT-small conversational model](https://huggingface.co/microsoft/DialoGPT-small) fine-tuned on conversation transcripts of 22 interviews with Elon Musk from [here](https://elon-musk-interviews.com/category/english/).
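A minimal chat sketch, assuming the standard DialoGPT single-turn generation recipe (the prompt and generation settings are illustrative, not the author's):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dennis-fast/DialoGPT-ElonMusk")
model = AutoModelForCausalLM.from_pretrained("dennis-fast/DialoGPT-ElonMusk")

# Encode a user message, append the EOS token, and generate the bot's reply
input_ids = tokenizer.encode("What do you think about Mars?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```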
|
pinot/wav2vec2-large-xls-r-300m-turkish-colab
|
pinot
| 2022-06-18T15:04:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T08:16:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7642
- Wer: 0.5894
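Since usage is not documented, here is a lower-level inference sketch (an illustration, not the author's code) showing greedy CTC decoding; `audio.wav` is a placeholder for a 16 kHz Turkish recording:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("pinot/wav2vec2-large-xls-r-300m-turkish-colab")
model = Wav2Vec2ForCTC.from_pretrained("pinot/wav2vec2-large-xls-r-300m-turkish-colab")

# "audio.wav" is a placeholder; wav2vec2 models expect 16 kHz mono audio
speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```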
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 24.5372 | 9.76 | 400 | 5.2857 | 0.9738 |
| 4.3812 | 19.51 | 800 | 3.6782 | 0.7315 |
| 1.624 | 29.27 | 1200 | 2.7642 | 0.5894 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
zdreiosis/ff_analysis_5
|
zdreiosis
| 2022-06-18T14:54:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"gen_ffa",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-18T12:46:34Z |
---
license: apache-2.0
tags:
- gen_ffa
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ff_analysis_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ff_analysis_5
This model is a fine-tuned version of [zdreiosis/ff_analysis_5](https://huggingface.co/zdreiosis/ff_analysis_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- F1: 0.9306
- Roc Auc: 0.9483
- Accuracy: 0.8137
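The reported F1/ROC-AUC metrics suggest a multi-label setup, so the sketch below scores each label independently with a sigmoid; this is an assumption not confirmed by the card, and the input sentence and 0.5 threshold are illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("zdreiosis/ff_analysis_5")
model = AutoModelForSequenceClassification.from_pretrained("zdreiosis/ff_analysis_5")

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: multi-label head, so each label gets an independent sigmoid score
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```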
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 0.27 | 50 | 0.0846 | 0.9305 | 0.9476 | 0.8075 |
| No log | 0.55 | 100 | 0.1000 | 0.9070 | 0.9320 | 0.7484 |
| No log | 0.82 | 150 | 0.0945 | 0.9126 | 0.9349 | 0.7640 |
| No log | 1.1 | 200 | 0.0973 | 0.9119 | 0.9353 | 0.7764 |
| No log | 1.37 | 250 | 0.0880 | 0.9336 | 0.9504 | 0.8261 |
| No log | 1.65 | 300 | 0.0857 | 0.9246 | 0.9434 | 0.8043 |
| No log | 1.92 | 350 | 0.0844 | 0.9324 | 0.9488 | 0.8199 |
| No log | 2.2 | 400 | 0.0881 | 0.9232 | 0.9450 | 0.7888 |
| No log | 2.47 | 450 | 0.0875 | 0.9277 | 0.9462 | 0.8012 |
| 0.1226 | 2.75 | 500 | 0.0824 | 0.9306 | 0.9483 | 0.8137 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
theojolliffe/bart-cnn-science-v4-e6-manual
|
theojolliffe
| 2022-06-18T14:42:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-18T14:16:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v4-e6-manual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v4-e6-manual
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0752
- Rouge1: 49.2922
- Rouge2: 27.0916
- Rougel: 29.2754
- Rougelsum: 46.4762
- Gen Len: 140.8
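A minimal summarization sketch (not the author's code; the beam settings are assumptions, with `max_length` chosen to match the reported Gen Len of ~140 tokens):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("theojolliffe/bart-cnn-science-v4-e6-manual")
model = AutoModelForSeq2SeqLM.from_pretrained("theojolliffe/bart-cnn-science-v4-e6-manual")

article = "..."  # placeholder for a long input document
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")

# num_beams and max_length are illustrative choices, not the author's settings
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```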
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 42 | 1.5703 | 45.3807 | 24.4571 | 27.901 | 42.7866 | 142.0 |
| No log | 2.0 | 84 | 1.5729 | 46.8902 | 24.2952 | 27.8304 | 43.9581 | 142.0 |
| No log | 3.0 | 126 | 1.7025 | 48.836 | 26.917 | 30.1325 | 45.7887 | 142.0 |
| No log | 4.0 | 168 | 1.8526 | 48.906 | 26.8641 | 30.4677 | 46.1825 | 139.2 |
| No log | 5.0 | 210 | 1.9818 | 51.145 | 28.834 | 30.1862 | 48.6876 | 141.8 |
| No log | 6.0 | 252 | 2.0752 | 49.2922 | 27.0916 | 29.2754 | 46.4762 | 140.8 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nutjung/q-FrozenLake-v1-4x4-noSlippery
|
nutjung
| 2022-06-18T14:32:37Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-18T14:32:30Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helpers defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="nutjung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
shurafa16/opus-mt-ar-en-finetuned-ar-to-en
|
shurafa16
| 2022-06-18T14:11:53Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:news_commentary",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T08:20:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- news_commentary
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: news_commentary
type: news_commentary
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 32.8872
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the news_commentary dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Bleu: 32.8872
- Gen Len: 56.084
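A minimal translation sketch (not the author's code; the Arabic input is illustrative and means roughly "good morning"):
```python
from transformers import pipeline

translator = pipeline("translation", model="shurafa16/opus-mt-ar-en-finetuned-ar-to-en")

# Illustrative Arabic input; the pipeline returns the English translation
print(translator("صباح الخير")[0]["translation_text"])
```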
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 188 | 0.7407 | 30.7259 | 56.296 |
| No log | 2.0 | 376 | 0.6927 | 32.2038 | 58.602 |
| 0.8066 | 3.0 | 564 | 0.6898 | 33.1091 | 57.72 |
| 0.8066 | 4.0 | 752 | 0.6925 | 33.0842 | 56.574 |
| 0.8066 | 5.0 | 940 | 0.6933 | 32.8872 | 56.084 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tuni/xlm-roberta-large-xnli-finetuned-mnli-SJP-v2
|
tuni
| 2022-06-18T13:48:56Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:swiss_judgment_prediction",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-18T11:53:32Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- swiss_judgment_prediction
metrics:
- accuracy
model-index:
- name: xlm-roberta-large-xnli-finetuned-mnli-SJP-v2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: swiss_judgment_prediction
type: swiss_judgment_prediction
args: all_languages
metrics:
- name: Accuracy
type: accuracy
value: 0.5954285714285714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-xnli-finetuned-mnli-SJP-v2
This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the swiss_judgment_prediction dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8093
- Accuracy: 0.5954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 0.8879 | 0.5191 |
| No log | 2.0 | 10 | 0.8093 | 0.5954 |
| No log | 3.0 | 15 | 2.4452 | 0.3176 |
| No log | 4.0 | 20 | 3.6636 | 0.3084 |
| No log | 5.0 | 25 | 3.7687 | 0.3393 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
eslamxm/mbart-finetuned-fa
|
eslamxm
| 2022-06-18T13:40:54Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"summarization",
"fa",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:pn_summary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-17T00:40:11Z |
---
tags:
- summarization
- fa
- mbart
- Abstractive Summarization
- generated_from_trainer
datasets:
- pn_summary
model-index:
- name: mbart-finetuned-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-fa
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the pn_summary dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2877
- Rouge-1: 44.07
- Rouge-2: 25.81
- Rouge-l: 38.96
- Gen Len: 41.7
- Bertscore: 78.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nestoralvaro/mt5-small-test-amazon-v2
|
nestoralvaro
| 2022-06-18T13:28:50Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-18T12:12:41Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-test-amazon-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-test-amazon-v2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0555
- Rouge1: 27.8124
- Rouge2: 15.3682
- Rougel: 27.8646
- Rougelsum: 27.9044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.2982 | 1.0 | 1935 | 2.7890 | 23.293 | 12.7229 | 23.3183 | 23.3368 |
| 2.9801 | 2.0 | 3870 | 2.4341 | 25.9888 | 14.0833 | 26.07 | 26.0897 |
| 2.5025 | 3.0 | 5805 | 2.2611 | 26.5127 | 14.5775 | 26.5105 | 26.5442 |
| 2.2681 | 4.0 | 7740 | 2.1966 | 27.7476 | 14.9971 | 27.835 | 27.8186 |
| 2.1198 | 5.0 | 9675 | 2.1209 | 27.3796 | 15.1938 | 27.4549 | 27.4759 |
| 2.0089 | 6.0 | 11610 | 2.0856 | 27.6637 | 15.2345 | 27.7419 | 27.7608 |
| 1.9416 | 7.0 | 13545 | 2.0637 | 27.9013 | 15.3682 | 27.9621 | 27.9833 |
| 1.9034 | 8.0 | 15480 | 2.0555 | 27.8124 | 15.3682 | 27.8646 | 27.9044 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit
|
Muennighoff
| 2022-06-18T13:04:47Z | 387 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-1.3B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
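The codebase above is the authoritative reference; as a quick start, the checkpoint can also be loaded as a plain sentence-transformers model (a minimal sketch assuming only the architecture listed below; the sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit")

# Cosine similarity between two illustrative sentence embeddings
embeddings = model.encode(["How do I bake bread?", "What is the recipe for bread?"])
print(util.cos_sim(embeddings[0], embeddings[1]))
```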
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
huggingtweets/joejoinerr
|
huggingtweets
| 2022-06-18T12:02:03Z | 243 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-03T12:31:27Z |
---
language: en
thumbnail: http://www.huggingtweets.com/joejoinerr/1655553718810/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477268531561517057/MhgifvbO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joe 🍞</div>
<div style="text-align: center; font-size: 14px;">@joejoinerr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joe 🍞.
| Data | Joe 🍞 |
| --- | --- |
| Tweets downloaded | 3176 |
| Retweets | 611 |
| Short tweets | 281 |
| Tweets kept | 2284 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f3589ez/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joejoinerr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35u823qi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35u823qi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joejoinerr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
skpawar1305/wav2vec2-base-finetuned-ks
|
skpawar1305
| 2022-06-18T11:12:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-06-18T03:23:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0903
- Accuracy: 0.9834
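A minimal keyword-spotting sketch (not the author's code; the audio path is a placeholder, and superb/ks clips are roughly one second of 16 kHz audio):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="skpawar1305/wav2vec2-base-finetuned-ks")

# "sample.wav" is a placeholder path to a short 16 kHz recording
print(classifier("sample.wav"))
```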
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7264 | 1.0 | 399 | 0.6319 | 0.9351 |
| 0.2877 | 2.0 | 798 | 0.1846 | 0.9748 |
| 0.175 | 3.0 | 1197 | 0.1195 | 0.9796 |
| 0.1672 | 4.0 | 1596 | 0.0903 | 0.9834 |
| 0.1235 | 5.0 | 1995 | 0.0854 | 0.9825 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Asia-N/opus-mt-ar-en-finetuned-ar-to-en
|
Asia-N
| 2022-06-18T10:25:40Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:news_commentary",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T22:12:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- news_commentary
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: news_commentary
type: news_commentary
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 32.5327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the news_commentary dataset.
It achieves the following results on the evaluation set:
- Loss: 10.6102
- Bleu: 32.5327
- Gen Len: 56.234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-09
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 32 | 10.6112 | 32.5327 | 56.234 |
| No log | 2.0 | 64 | 10.6103 | 32.5327 | 56.234 |
| No log | 3.0 | 96 | 10.6102 | 32.5327 | 56.234 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE-2
|
Willy
| 2022-06-18T10:07:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-18T05:31:54Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE-2
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5279
- Accuracy: 0.7836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6008 | 1.0 | 9 | 0.5243 | 0.7836 |
| 0.6014 | 2.0 | 18 | 0.5279 | 0.7836 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingnft/cyberkongz
|
huggingnft
| 2022-06-18T10:03:27Z | 42 | 3 |
transformers
|
[
"transformers",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"unconditional-image-generation",
"dataset:huggingnft/cyberkongz",
"license:mit",
"endpoints_compatible",
"region:us"
] |
unconditional-image-generation
| 2022-04-13T09:42:37Z |
---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
- unconditional-image-generation
datasets:
- huggingnft/cyberkongz
license: mit
---
# Hugging NFT: cyberkongz
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cyberkongz).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/cyberkongz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/cyberkongz).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year={2022}
}
```
|
c17hawke/bert-fine-tuned-cola_2
|
c17hawke
| 2022-06-18T09:40:26Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-18T09:20:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola_2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3078
- Validation Loss: 0.4072
- Epoch: 1
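A minimal TensorFlow inference sketch (assuming CoLA-style binary acceptability labels, which the card does not confirm; the sentence is illustrative):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("c17hawke/bert-fine-tuned-cola_2")
model = TFAutoModelForSequenceClassification.from_pretrained("c17hawke/bert-fine-tuned-cola_2")

inputs = tokenizer("The book was read by the teacher.", return_tensors="tf")
logits = model(**inputs).logits

# Assumption: index 1 = acceptable, index 0 = unacceptable (CoLA convention)
print(tf.argmax(logits, axis=-1).numpy())
```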
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4976 | 0.4236 | 0 |
| 0.3078 | 0.4072 | 1 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/andrewdoyle_com-conceptualjames-titaniamcgrath
|
huggingtweets
| 2022-06-18T09:11:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-18T09:11:04Z |
---
language: en
thumbnail: http://www.huggingtweets.com/andrewdoyle_com-conceptualjames-titaniamcgrath/1655543501221/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/991329326846087169/vxothdvT_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1283787273310556161/HpOtnzmp_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1459175734602350593/cW3fs5lR_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Titania McGrath & Andrew Doyle & James Lindsay, weaponizing your mom</div>
<div style="text-align: center; font-size: 14px;">@andrewdoyle_com-conceptualjames-titaniamcgrath</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Titania McGrath & Andrew Doyle & James Lindsay, weaponizing your mom.
| Data | Titania McGrath | Andrew Doyle | James Lindsay, weaponizing your mom |
| --- | --- | --- | --- |
| Tweets downloaded | 2873 | 3232 | 3226 |
| Retweets | 220 | 781 | 1222 |
| Short tweets | 104 | 306 | 587 |
| Tweets kept | 2549 | 2145 | 1417 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dewpz75/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @andrewdoyle_com-conceptualjames-titaniamcgrath's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ed5g462) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ed5g462/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/andrewdoyle_com-conceptualjames-titaniamcgrath')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
S2312dal/M4_MLM_cross
|
S2312dal
| 2022-06-18T08:48:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-18T08:13:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M4_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M4_MLM_cross
This model is a fine-tuned version of [S2312dal/M4_MLM](https://huggingface.co/S2312dal/M4_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0222
- Pearson: 0.9472
- Spearmanr: 0.8983
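The Pearson/Spearman metrics suggest the model scores sentence pairs with a single regression output (an assumption the card does not confirm); a minimal sketch with illustrative inputs:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("S2312dal/M4_MLM_cross")
model = AutoModelForSequenceClassification.from_pretrained("S2312dal/M4_MLM_cross")

# Assumption: a cross-encoder that scores a sentence pair for similarity
inputs = tokenizer("A man is playing guitar.", "Someone plays an instrument.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)
```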
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0353 | 1.0 | 131 | 0.0590 | 0.8326 | 0.8225 |
| 0.0478 | 2.0 | 262 | 0.0368 | 0.9234 | 0.8894 |
| 0.0256 | 3.0 | 393 | 0.0222 | 0.9472 | 0.8983 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Nonzerophilip/bert-finetuned-ner_swedish_test_large_set
|
Nonzerophilip
| 2022-06-18T08:36:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:suc3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-17T14:12:32Z |
---
tags:
- generated_from_trainer
datasets:
- suc3
model-index:
- name: bert-finetuned-ner_swedish_test_large_set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_test_large_set
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) on the suc3 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0265
- eval_precision: 0.8542
- eval_recall: 0.8468
- eval_f1: 0.8505
- eval_accuracy: 0.9919
- eval_runtime: 1076.8307
- eval_samples_per_second: 10.685
- eval_steps_per_second: 1.336
- epoch: 1.0
- step: 5754
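A minimal NER sketch (not the author's code; the Swedish sentence is illustrative and means "Anna lives in Stockholm"):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Nonzerophilip/bert-finetuned-ner_swedish_test_large_set",
               aggregation_strategy="simple")

# Groups word-piece predictions into whole entities
print(ner("Anna bor i Stockholm"))
```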
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
c17hawke/bert-fine-tuned-cola
|
c17hawke
| 2022-06-17T21:12:23Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T19:51:35Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5013
- Validation Loss: 0.4341
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5013 | 0.4341 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.9.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Mahmoud1816Yasser/tmp_trainer
|
Mahmoud1816Yasser
| 2022-06-17T21:10:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-06-17T21:05:23Z |
---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ryna/wav2vec2-large-xlsr-53-Enlgish-FT-ASCEND-colab
|
Ryna
| 2022-06-17T20:08:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:ascend",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-17T16:16:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ascend
model-index:
- name: wav2vec2-large-xlsr-53-Enlgish-FT-ASCEND-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Enlgish-FT-ASCEND-colab
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the ascend dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10000
- total_train_batch_size: 160000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gemasphi/laprador
|
gemasphi
| 2022-06-17T19:42:04Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-17T19:41:49Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador')
model = AutoModel.from_pretrained('gemasphi/laprador')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
skyline22/RLpinball
|
skyline22
| 2022-06-17T18:21:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"VideoPinball-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-17T18:20:38Z |
---
library_name: stable-baselines3
tags:
- VideoPinball-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 8997.20 +/- 6190.02
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: VideoPinball-v4
type: VideoPinball-v4
---
# **DQN** Agent playing **VideoPinball-v4**
This is a trained model of a **DQN** agent playing **VideoPinball-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env VideoPinball-v4 -orga skyline22 -f logs/
python enjoy.py --algo dqn --env VideoPinball-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env VideoPinball-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env VideoPinball-v4 -f logs/ -orga skyline22
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
huggingtweets/pdchina
|
huggingtweets
| 2022-06-17T18:03:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-17T18:01:23Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pdchina/1655488982839/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1246469365089939456/jAjE_fKB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">People's Daily, China</div>
<div style="text-align: center; font-size: 14px;">@pdchina</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from People's Daily, China.
| Data | People's Daily, China |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 20 |
| Short tweets | 2 |
| Tweets kept | 3228 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3b8is5jg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pdchina's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rg0kmkg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rg0kmkg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pdchina')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
eslamxm/mbart-finetune-en-cnn
|
eslamxm
| 2022-06-17T16:48:32Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"summarization",
"en",
"seq2seq",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-16T15:48:16Z |
---
tags:
- summarization
- en
- seq2seq
- mbart
- Abstractive Summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: mbart-finetune-en-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetune-en-cnn
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5577
- Rouge-1: 37.69
- Rouge-2: 16.47
- Rouge-l: 35.53
- Gen Len: 79.93
- Bertscore: 74.92
## Model description
More information needed
## Intended uses & limitations
More information needed
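Pending a fuller description, here is a minimal summarization sketch with the 🤗 Transformers pipeline (the sample text and generation settings are illustrative assumptions, not values from this training run):
```python
from transformers import pipeline

# Load the fine-tuned mBART checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="eslamxm/mbart-finetune-en-cnn")

article = "..."  # an English news article, e.g. in the CNN/DailyMail style
# max_length/min_length are illustrative; tune them for your inputs.
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```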
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RayMelius/bert-finetuned-ner
|
RayMelius
| 2022-06-17T16:06:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-17T15:56:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
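Until the card is completed, here is a minimal token-classification sketch (the example sentence is an assumption):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entity spans.
ner = pipeline("token-classification",
               model="RayMelius/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```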
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
eslamxm/MBart-finetuned-ur-xlsum
|
eslamxm
| 2022-06-17T14:59:58Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"summarization",
"ur",
"seq2seq",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-16T23:41:23Z |
---
tags:
- summarization
- ur
- seq2seq
- mbart
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: MBart-finetuned-ur-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MBart-finetuned-ur-xlsum
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2663
- Rouge-1: 40.6
- Rouge-2: 18.9
- Rouge-l: 34.39
- Gen Len: 37.88
- Bertscore: 77.06
## Model description
More information needed
## Intended uses & limitations
More information needed
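As a lower-level usage sketch (all generation arguments here are illustrative assumptions, not settings from the training run):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("eslamxm/MBart-finetuned-ur-xlsum")
model = AutoModelForSeq2SeqLM.from_pretrained("eslamxm/MBart-finetuned-ur-xlsum")

text = "..."  # an Urdu news article
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```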
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
joitandr/dqn-PongNoFrameskip-v4
|
joitandr
| 2022-06-17T14:55:31Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-17T14:53:22Z |
---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 5.00 +/- 6.43
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
---
# **DQN** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env PongNoFrameskip-v4 -orga joitandr -f logs/
python enjoy.py --algo dqn --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env PongNoFrameskip-v4 -f logs/ -orga joitandr
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jkhan447/sarcasm-detection-RoBerta-base-newdata
|
jkhan447
| 2022-06-17T14:34:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T08:28:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base-newdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base-newdata
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4844
- Accuracy: 0.7824
## Model description
More information needed
## Intended uses & limitations
More information needed
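Until the intended uses are documented, here is a minimal classification sketch (the input sentence is illustrative; check the repo's `id2label` config for the label-to-class mapping):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="jkhan447/sarcasm-detection-RoBerta-base-newdata")
# Returns a label name/id and a confidence score.
print(classifier("Oh great, another Monday. Just what I needed."))
```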
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
efederici/convnext-base-224-22k-1k-orig-cats-vs-dogs
|
efederici
| 2022-06-17T14:11:20Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"vision",
"dataset:cats_vs_dogs",
"arxiv:2201.03545",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-17T09:33:45Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- cats_vs_dogs
metrics:
- accuracy
model-index:
- name: convnext-base-224-22k-1k-orig-cats-vs-dogs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cats_vs_dogs
type: cats_vs_dogs
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9973333333333333
---
# convnext-base-224-22k-1k-orig-cats-vs-dogs
This model is a fine-tuned version of [facebook/convnext-base-224-22k-1k](https://huggingface.co/facebook/convnext-base-224-22k-1k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0103
- Accuracy: 0.9973
<p align="center">
<img src="https://files.ocula.com/anzax/09/09f77133-7740-4130-a567-84fb56736362_650_544.jpg" width="600"> </br>
Jockum Nordström, Cat Dog Cat, 2016
</p>
## Model description
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
classla/bcms-bertic-parlasent-bcs-bi
|
classla
| 2022-06-17T13:51:54Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"sentiment-analysis",
"hr",
"arxiv:2206.00929",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-01T09:10:17Z |
---
language: "hr"
tags:
- text-classification
- sentiment-analysis
widget:
- text: "Poštovani potpredsjedničke Vlade i ministre hrvatskih branitelja, mislite li da ste zapravo iznevjerili svoje suborce s kojima ste 555 dana prosvjedovali u šatoru protiv tadašnjih dužnosnika jer ste zapravo donijeli zakon koji je neprovediv, a birali ste si suradnike koji nemaju etički integritet."
---
# bcms-bertic-parlasent-bcs-bi
Binary text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the BCS Political Sentiment dataset (sentence-level data).
This classifier classifies text into only two categories: Negative vs. Other. For the ternary classifier (Negative, Neutral, Positive) check [this model](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-ter).
For details on the dataset and the finetuning procedure, please see [this paper](https://arxiv.org/abs/2206.00929).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. A brief sweep over the number of epochs was run beforehand, and the presumed optimum was 9; all other arguments were kept at their defaults.
```python
model_args = {
"num_train_epochs": 9
}
```
## Performance in comparison with ternary classifier
| model | average macro F1 |
|-------------------------------------------|------------------|
| bcms-bertic-parlasent-bcs-ter | 0.7941 ± 0.0101 |
| bcms-bertic-parlasent-bcs-bi (this model) | 0.8999 ± 0.0120 |
## Usage example with `simpletransformers==0.63.7`
```python
from simpletransformers.classification import ClassificationModel
model = ClassificationModel("electra", "classla/bcms-bertic-parlasent-bcs-bi")
predictions, logits = model.predict([
"Đački autobusi moraju da voze svaki dan",
"Vi niste normalni"
]
)
predictions
# Output: array([1, 0])
[model.config.id2label[i] for i in predictions]
# Output: ['Other', 'Negative']
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
and the paper describing the dataset and methods for the current finetuning:
```
@misc{https://doi.org/10.48550/arxiv.2206.00929,
doi = {10.48550/ARXIV.2206.00929},
url = {https://arxiv.org/abs/2206.00929},
author = {Mochtak, Michal and Rupnik, Peter and Ljubešič, Nikola},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates from Bosnia-Herzegovina, Croatia, and Serbia},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
```
|
Guillaume63/q-Taxi-v3
|
Guillaume63
| 2022-06-17T12:24:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-17T12:24:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Guillaume63/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
waboucay/camembert-large-finetuned-xnli_fr
|
waboucay
| 2022-06-17T11:59:21Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T11:38:30Z |
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 92.9 | 92.1 |
| test | 91.7 | 90.7 |
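As a usage sketch with 🤗 Transformers, assuming the checkpoint keeps the standard XNLI entailment/neutral/contradiction label mapping in its config, it can also drive the zero-shot-classification pipeline:
```python
from transformers import pipeline

# Zero-shot classification relies on the model exposing an "entailment" label.
classifier = pipeline("zero-shot-classification",
                      model="waboucay/camembert-large-finetuned-xnli_fr")
print(classifier("Le film était absolument magnifique.",
                 candidate_labels=["positif", "négatif"]))
```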
|
rajendra-ml/q-FrozenLake-v1-4x4-noSlippery
|
rajendra-ml
| 2022-06-17T09:15:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-17T09:15:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rajendra-ml/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/iantdr
|
huggingtweets
| 2022-06-17T09:09:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-17T09:09:26Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365703183/YT_Croydon_Flyer_twitter_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ian anderson</div>
<div style="text-align: center; font-size: 14px;">@iantdr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ian anderson.
| Data | ian anderson |
| --- | --- |
| Tweets downloaded | 3201 |
| Retweets | 2052 |
| Short tweets | 316 |
| Tweets kept | 833 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bopfm9o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iantdr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1papgk0r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1papgk0r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iantdr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
S2312dal/M7_MLM
|
S2312dal
| 2022-06-17T08:49:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-17T08:40:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: M7_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M7_MLM
This model is a fine-tuned version of [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2304
## Model description
More information needed
## Intended uses & limitations
More information needed
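A minimal fill-mask sketch (the prompt is illustrative; this checkpoint is RoBERTa-based, so the mask token is `<mask>`):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="S2312dal/M7_MLM")
print(fill("The capital of France is <mask>."))
```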
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.2227 | 1.0 | 25 | 8.6091 |
| 8.6536 | 2.0 | 50 | 8.2492 |
| 8.5065 | 3.0 | 75 | 8.3056 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
S2312dal/M6_MLM
|
S2312dal
| 2022-06-17T08:38:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-17T08:27:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: M6_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M6_MLM
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4015 | 1.0 | 25 | 2.1511 |
| 2.2207 | 2.0 | 50 | 2.1268 |
| 2.168 | 3.0 | 75 | 2.0796 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jcrbsa/pt-gpt2vit
|
jcrbsa
| 2022-06-17T08:36:13Z | 3 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"vision-encoder-decoder",
"image-text-to-text",
"pt",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-06-14T03:16:18Z |
---
language:
- pt
---
Image captioning in Portuguese, trained with a ViT encoder and a GPT-2 decoder.
[DEMO](https://huggingface.co/spaces/adalbertojunior/image_captioning_portuguese)
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
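A minimal captioning sketch (this assumes the repository ships the feature-extractor and tokenizer configs alongside the weights; the image path and generation settings are placeholders):
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("jcrbsa/pt-gpt2vit")
feature_extractor = ViTFeatureExtractor.from_pretrained("jcrbsa/pt-gpt2vit")
tokenizer = AutoTokenizer.from_pretrained("jcrbsa/pt-gpt2vit")

image = Image.open("photo.jpg").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```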
|
S2312dal/M5_MLM
|
S2312dal
| 2022-06-17T08:25:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-17T08:02:01Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: M5_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M5_MLM
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.8279 | 1.0 | 62 | 7.9889 |
| 7.7536 | 2.0 | 124 | 7.3750 |
| 7.2065 | 3.0 | 186 | 6.8625 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ArthurZ/tiny-random-bert-sharded
|
ArthurZ
| 2022-06-17T08:07:42Z | 5,243 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-17T07:49:01Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tiny-random-bert-sharded
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tiny-random-bert-sharded
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-Bert-base-uncased-newdata
|
jkhan447
| 2022-06-17T07:56:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T09:29:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased-newdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-newdata
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5383
- Accuracy: 0.7766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aditya22/bert-finetuned-ner
|
aditya22
| 2022-06-17T07:18:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-17T07:01:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.936018564561578
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9431315240083508
- name: Accuracy
type: accuracy
value: 0.9859598516512628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0642
- Precision: 0.9360
- Recall: 0.9504
- F1: 0.9431
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0855 | 1.0 | 1756 | 0.0642 | 0.9108 | 0.9387 | 0.9246 | 0.9834 |
| 0.0414 | 2.0 | 3512 | 0.0619 | 0.9331 | 0.9502 | 0.9415 | 0.9853 |
| 0.0181 | 3.0 | 5268 | 0.0642 | 0.9360 | 0.9504 | 0.9431 | 0.9860 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sanjeev49/marian-finetuned-kde4-en-to-fr
|
Sanjeev49
| 2022-06-17T06:31:10Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T12:07:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Sanjeev49/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sanjeev49/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0601
- Validation Loss: 0.8952
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
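A minimal translation sketch (the repository hosts TensorFlow weights, so this assumes TensorFlow is installed; the example sentence is illustrative):
```python
from transformers import pipeline

# framework="tf" because this checkpoint was saved from Keras/TensorFlow.
translator = pipeline("translation_en_to_fr",
                      model="Sanjeev49/marian-finetuned-kde4-en-to-fr",
                      framework="tf")
print(translator("Default to expanded threads")[0]["translation_text"])
```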
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 5912, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0601 | 0.8952 | 0 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anithapappu/wav2vec2-base-timit-google-colab
|
anithapappu
| 2022-06-17T03:05:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-23T19:00:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5506
- Wer: 0.3355
## Model description
More information needed
## Intended uses & limitations
More information needed
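A minimal transcription sketch (the audio path is a placeholder; the pipeline decodes and resamples the file to the 16 kHz input Wav2Vec2 expects, provided `ffmpeg` is available):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="anithapappu/wav2vec2-base-timit-google-colab")
# Accepts a path to an audio file and returns the predicted transcript.
print(asr("sample.wav")["text"])
```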
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4326 | 1.0 | 500 | 1.5832 | 1.0063 |
| 0.8235 | 2.01 | 1000 | 0.5310 | 0.5134 |
| 0.4224 | 3.01 | 1500 | 0.4488 | 0.4461 |
| 0.2978 | 4.02 | 2000 | 0.4243 | 0.4191 |
| 0.232 | 5.02 | 2500 | 0.4532 | 0.4149 |
| 0.1902 | 6.02 | 3000 | 0.4732 | 0.3912 |
| 0.1628 | 7.03 | 3500 | 0.4807 | 0.3868 |
| 0.1437 | 8.03 | 4000 | 0.5295 | 0.3670 |
| 0.1241 | 9.04 | 4500 | 0.4602 | 0.3810 |
| 0.1206 | 10.04 | 5000 | 0.4691 | 0.3783 |
| 0.0984 | 11.04 | 5500 | 0.4500 | 0.3710 |
| 0.0929 | 12.05 | 6000 | 0.5247 | 0.3550 |
| 0.0914 | 13.05 | 6500 | 0.5546 | 0.3821 |
| 0.0742 | 14.06 | 7000 | 0.4874 | 0.3646 |
| 0.0729 | 15.06 | 7500 | 0.5327 | 0.3934 |
| 0.0663 | 16.06 | 8000 | 0.5769 | 0.3661 |
| 0.0575 | 17.07 | 8500 | 0.5191 | 0.3524 |
| 0.0588 | 18.07 | 9000 | 0.5155 | 0.3360 |
| 0.0456 | 19.08 | 9500 | 0.5135 | 0.3539 |
| 0.0444 | 20.08 | 10000 | 0.5380 | 0.3603 |
| 0.0419 | 21.08 | 10500 | 0.5275 | 0.3467 |
| 0.0366 | 22.09 | 11000 | 0.5072 | 0.3487 |
| 0.0331 | 23.09 | 11500 | 0.5450 | 0.3437 |
| 0.0345 | 24.1 | 12000 | 0.5138 | 0.3431 |
| 0.029 | 25.1 | 12500 | 0.5067 | 0.3413 |
| 0.0274 | 26.1 | 13000 | 0.5421 | 0.3422 |
| 0.0243 | 27.11 | 13500 | 0.5456 | 0.3392 |
| 0.0226 | 28.11 | 14000 | 0.5665 | 0.3368 |
| 0.0216 | 29.12 | 14500 | 0.5506 | 0.3355 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 1.13.3
- Tokenizers 0.12.1
|
sun1638650145/q-FrozenLake-v1-4x4-noSlippery
|
sun1638650145
| 2022-06-17T03:02:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-17T03:02:00Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the
# Hugging Face Deep RL course notebook; they are assumed to be in scope.
model = load_from_hub(repo_id='sun1638650145/q-FrozenLake-v1-4x4-noSlippery', filename='q-learning.pkl')

# Don't forget to check if you need to add additional attributes (e.g. is_slippery=False)
env = gym.make(model['env_id'])
evaluate_agent(env, model['max_steps'], model['n_eval_episodes'], model['qtable'], model['eval_seed'])
```
|
huggingtweets/mcdonaldsuk-potus-tomcruise
|
huggingtweets
| 2022-06-17T01:44:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-17T01:44:46Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/603269306026106880/42CwEF4n_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1533739179770843141/kNhGgW4K_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1380530524779859970/TfwVAbyX_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Cruise & McDonald's UK & President Biden</div>
<div style="text-align: center; font-size: 14px;">@mcdonaldsuk-potus-tomcruise</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Cruise & McDonald's UK & President Biden.
| Data | Tom Cruise | McDonald's UK | President Biden |
| --- | --- | --- | --- |
| Tweets downloaded | 3036 | 3250 | 3250 |
| Retweets | 1055 | 0 | 96 |
| Short tweets | 88 | 36 | 8 |
| Tweets kept | 1893 | 3214 | 3146 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xo9k90g3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mcdonaldsuk-potus-tomcruise's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uk4lqo8q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uk4lqo8q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mcdonaldsuk-potus-tomcruise')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/tomhanks
|
huggingtweets
| 2022-06-17T01:00:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-17T01:00:48Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1193951507026075648/Ot3GmqGK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Hanks</div>
<div style="text-align: center; font-size: 14px;">@tomhanks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Hanks.
| Data | Tom Hanks |
| --- | --- |
| Tweets downloaded | 948 |
| Retweets | 9 |
| Short tweets | 15 |
| Tweets kept | 924 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mkvpkso/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tomhanks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tplh98q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tplh98q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tomhanks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/tomcruise
|
huggingtweets
| 2022-06-17T01:00:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-17T00:59:56Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/603269306026106880/42CwEF4n_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Cruise</div>
<div style="text-align: center; font-size: 14px;">@tomcruise</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Cruise.
| Data | Tom Cruise |
| --- | --- |
| Tweets downloaded | 3036 |
| Retweets | 1055 |
| Short tweets | 88 |
| Tweets kept | 1893 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ppnkvd5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tomcruise's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q772s43) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q772s43/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tomcruise')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ouiame/bertGpt2Summ
|
ouiame
| 2022-06-17T00:38:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"autotrain",
"unk",
"dataset:ouiame/autotrain-data-Robertatogpt2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-16T20:13:43Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-Robertatogpt2
co2_eq_emissions: 2.4722651844547827
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 995132940
- CO2 Emissions (in grams): 2.4722651844547827
## Validation Metrics
- Loss: 3.5972988605499268
- Rouge1: 16.1218
- Rouge2: 2.9195
- RougeL: 13.0085
- RougeLsum: 13.2975
- Gen Len: 19.9962
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ouiame/autotrain-Robertatogpt2-995132940
```
|
huggingtweets/fawfulthgreat64
|
huggingtweets
| 2022-06-17T00:31:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-17T00:31:08Z |
---
language: en
thumbnail: http://www.huggingtweets.com/fawfulthgreat64/1655425906757/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1520110813209665538/-4GuBQGb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jamey Viv 🏳️⚧️ 🇺🇦 #Toaster4DisneyPlus</div>
<div style="text-align: center; font-size: 14px;">@fawfulthgreat64</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jamey Viv 🏳️⚧️ 🇺🇦 #Toaster4DisneyPlus.
| Data | Jamey Viv 🏳️⚧️ 🇺🇦 #Toaster4DisneyPlus |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 1394 |
| Short tweets | 133 |
| Tweets kept | 1719 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21ve8lp9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fawfulthgreat64's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hg3e2g0j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hg3e2g0j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fawfulthgreat64')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mindwrapped/bible-generator-awd-lstm
|
mindwrapped
| 2022-06-16T23:35:05Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-06-16T23:34:56Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
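Until the card is filled in, a minimal loading sketch with the `huggingface_hub` fastai integration (assuming the repo was pushed with `push_to_hub_fastai` and contains an AWD-LSTM language-model learner):
```python
from huggingface_hub import from_pretrained_fastai

# Download and rebuild the fastai Learner from the Hub.
learner = from_pretrained_fastai("mindwrapped/bible-generator-awd-lstm")
# Language-model learners generate text: predict(prompt, n_words, ...).
print(learner.predict("In the beginning", 20, temperature=0.75))
```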
|
huggingtweets/chrishemsworth-deadpoolmovie
|
huggingtweets
| 2022-06-16T23:26:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-16T20:28:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/chrishemsworth-deadpoolmovie/1655421962384/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1247482752351588352/EgHoUNqQ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1208234904405757953/mT0cFOVQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chris Hemsworth & Deadpool Movie</div>
<div style="text-align: center; font-size: 14px;">@chrishemsworth-deadpoolmovie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chris Hemsworth & Deadpool Movie.
| Data | Chris Hemsworth | Deadpool Movie |
| --- | --- | --- |
| Tweets downloaded | 482 | 1125 |
| Retweets | 140 | 276 |
| Short tweets | 39 | 115 |
| Tweets kept | 303 | 734 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f48nrzp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrishemsworth-deadpoolmovie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tf8a3vu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tf8a3vu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrishemsworth-deadpoolmovie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jbsalvagno
|
huggingtweets
| 2022-06-16T22:41:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-16T22:41:08Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/817874051146412032/rPvqTOFF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Javier Bustos</div>
<div style="text-align: center; font-size: 14px;">@jbsalvagno</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Javier Bustos.
| Data | Javier Bustos |
| --- | --- |
| Tweets downloaded | 3179 |
| Retweets | 2756 |
| Short tweets | 30 |
| Tweets kept | 393 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29wlz981/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jbsalvagno's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k72pz4ho) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k72pz4ho/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jbsalvagno')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/leisha_hailey
|
huggingtweets
| 2022-06-16T22:08:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-16T22:04:50Z |
---
language: en
thumbnail: http://www.huggingtweets.com/leisha_hailey/1655417283179/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1601201593/Screen_shot_2011-10-20_at_8.42.01_PM_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Leisha Hailey</div>
<div style="text-align: center; font-size: 14px;">@leisha_hailey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Leisha Hailey.
| Data | Leisha Hailey |
| --- | --- |
| Tweets downloaded | 1084 |
| Retweets | 77 |
| Short tweets | 66 |
| Tweets kept | 941 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ecfevcj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leisha_hailey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vat0dsmp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vat0dsmp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/leisha_hailey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
SimingSiming/dqn-SpaceInvadersNoFrameskip-v4
|
SimingSiming
| 2022-06-16T21:19:12Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-16T21:18:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 210.00 +/- 102.52
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SimingSiming -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SimingSiming
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
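Alternatively, you can load the checkpoint directly in Python. A minimal sketch, assuming the usual RL Zoo naming convention for the checkpoint file (`dqn-SpaceInvadersNoFrameskip-v4.zip`) and the `huggingface_sb3` helper package:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub; the filename follows the usual
# RL Zoo convention and is an assumption, not documented in this card.
checkpoint = load_from_hub(
    repo_id="SimingSiming/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# To run the agent, recreate the training setup (AtariWrapper + 4-frame stack).
```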
|
roymukund/xlm-roberta-base-finetuned-ner
|
roymukund
| 2022-06-16T20:32:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:hi_ner-original",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-16T09:30:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- hi_ner-original
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: hi_ner-original
type: hi_ner-original
args: HiNER
metrics:
- name: Precision
type: precision
value: 0.7366076627460114
- name: Recall
type: recall
value: 0.6770947627585838
- name: F1
type: f1
value: 0.7055985498152408
- name: Accuracy
type: accuracy
value: 0.9359390321752693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the hi_ner-original dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2314
- Precision: 0.7366
- Recall: 0.6771
- F1: 0.7056
- Accuracy: 0.9359
## Model description
More information needed
## Intended uses & limitations
More information needed
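Pending fuller documentation, here is a minimal inference sketch using the standard token-classification pipeline (the Hindi example sentence is illustrative only, not from the training data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="roymukund/xlm-roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("भारत की राजधानी नई दिल्ली है।"))  # "The capital of India is New Delhi."
```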
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2025 | 0.74 | 7000 | 0.2146 | 0.7399 | 0.6197 | 0.6745 | 0.9316 |
| 0.1641 | 1.47 | 14000 | 0.2238 | 0.7618 | 0.6108 | 0.6780 | 0.9336 |
| 0.1404 | 2.21 | 21000 | 0.2302 | 0.7560 | 0.6327 | 0.6889 | 0.9350 |
| 0.1371 | 2.95 | 28000 | 0.2226 | 0.7395 | 0.6600 | 0.6975 | 0.9350 |
| 0.1248 | 3.68 | 35000 | 0.2314 | 0.7366 | 0.6771 | 0.7056 | 0.9359 |
| 0.1112 | 4.42 | 42000 | 0.2423 | 0.7089 | 0.7064 | 0.7077 | 0.9333 |
| 0.1048 | 5.16 | 49000 | 0.2599 | 0.7326 | 0.6793 | 0.7050 | 0.9349 |
| 0.1091 | 5.89 | 56000 | 0.2542 | 0.7244 | 0.6918 | 0.7077 | 0.9348 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
eplatas/scibert_scivocab_uncased_finetuned_leukaemia
|
eplatas
| 2022-06-16T20:01:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-16T19:41:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_uncased_finetuned_leukaemia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased_finetuned_leukaemia
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
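Since the intended task is not documented here, the safest starting point is to load the checkpoint generically; a minimal sketch (`AutoModel` is an assumption, as the task head is unclear from the card):
```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and encoder weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("eplatas/scibert_scivocab_uncased_finetuned_leukaemia")
model = AutoModel.from_pretrained("eplatas/scibert_scivocab_uncased_finetuned_leukaemia")
```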
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.742 | 1.0 | 50 | 2.9184 |
| 0.7729 | 2.0 | 100 | 1.0324 |
| 0.697 | 3.0 | 150 | 0.5968 |
| 0.6573 | 4.0 | 200 | 0.4985 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
S2312dal/M3_MLM
|
S2312dal
| 2022-06-16T19:46:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-16T19:22:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: M3_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M3_MLM
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8186
## Model description
More information needed
## Intended uses & limitations
More information needed
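In the absence of further documentation, a minimal fill-mask sketch (assuming the BERT-style `[MASK]` token used by SpanBERT; the example sentence is illustrative only):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="S2312dal/M3_MLM")
print(fill("The patient was diagnosed with acute [MASK]."))
```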
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.6707 | 1.0 | 26 | 7.4412 |
| 6.9122 | 2.0 | 52 | 6.3385 |
| 6.2166 | 3.0 | 78 | 5.9148 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
S2312dal/M2_MLM
|
S2312dal
| 2022-06-16T19:42:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-16T19:10:00Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: M2_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M2_MLM
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3686
## Model description
More information needed
## Intended uses & limitations
More information needed
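In the absence of further documentation, a minimal fill-mask sketch (RoBERTa-based models use `<mask>` as the mask token; the example sentence is illustrative only):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="S2312dal/M2_MLM")
print(fill("The results of the experiment were <mask>."))
```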
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5955 | 1.0 | 25 | 1.4376 |
| 1.4736 | 2.0 | 50 | 1.2969 |
| 1.3925 | 3.0 | 75 | 1.3163 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
income/bpr-gpl-fiqa-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:21:26Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:21:18Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-fiqa-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-fiqa-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-fiqa-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-fiqa-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-fiqa-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5076 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:17:34Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:17:25Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 338 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-nq-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:15:23Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:15:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-nq-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-nq-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-nq-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-nq-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-nq-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 245832 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-scidocs-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:07:08Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:07:01Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-scidocs-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-scidocs-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-scidocs-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-scidocs-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-scidocs-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2337 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-scifact-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:05:25Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:05:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-scifact-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-scifact-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-scifact-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-scifact-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-scifact-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 481 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-signal1m-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:02:31Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:02:22Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-signal1m-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-signal1m-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-signal1m-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-signal1m-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-signal1m-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 263015 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T18:00:33Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T18:00:26Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15001 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T17:59:18Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T17:59:11Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 55028 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-webis-touche2020-base-msmarco-distilbert-tas-b
|
income
| 2022-06-16T17:58:08Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-16T17:57:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-webis-touche2020-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-webis-touche2020-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-webis-touche2020-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-webis-touche2020-base-msmarco-distilbert-tas-b')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-webis-touche2020-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 34866 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/basilhalperin-ben_golub-tylercowen
|
huggingtweets
| 2022-06-16T17:09:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-16T17:03:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/basilhalperin-ben_golub-tylercowen/1655399323629/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1483290763056320512/oILN7yPo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1043847779355897857/xyZk8v-m_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1284936824075550723/ix2eGZd7_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tylercowen & Basil Halperin & Ben Golub 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@basilhalperin-ben_golub-tylercowen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tylercowen & Basil Halperin & Ben Golub 🇺🇦.
| Data | tylercowen | Basil Halperin | Ben Golub 🇺🇦 |
| --- | --- | --- | --- |
| Tweets downloaded | 2642 | 1024 | 3247 |
| Retweets | 2065 | 80 | 1009 |
| Short tweets | 43 | 60 | 390 |
| Tweets kept | 534 | 884 | 1848 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4x0ck2xi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @basilhalperin-ben_golub-tylercowen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fuzqv36t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fuzqv36t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/basilhalperin-ben_golub-tylercowen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|