modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-29 12:28:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 526 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-29 12:28:30) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
ahmeddbahaa/xlmroberta-finetuned-Spanish | ahmeddbahaa | 2022-06-16T21:05:45Z | 13 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "summarization", "xlmroberta", "es", "abstractive summarization", "generated_from_trainer", "dataset:wiki_lingua", "autotrain_compatible", "endpoints_compatible", "region:us"] | summarization | 2022-06-16T11:04:00Z |
---
tags:
- summarization
- xlmroberta
- encoder-decoder
- es
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: xlmroberta-finetuned-Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta-finetuned-Spanish
This model is a fine-tuned version of an unspecified base model on the wiki_lingua dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
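In the meantime, a minimal usage sketch (assuming the checkpoint loads with the standard `summarization` pipeline, as its tags suggest; the input text is made up):

```python
from transformers import pipeline

# Hedged sketch: the model id comes from this card, everything else is assumed.
summarizer = pipeline("summarization", model="ahmeddbahaa/xlmroberta-finetuned-Spanish")
resumen = summarizer("Texto largo en español que se desea resumir...", max_length=64)
print(resumen[0]["summary_text"])
```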
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
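These settings give an effective train batch size of 4 × 8 = 32 (per-device batch size times gradient accumulation steps). As a sketch, they map onto `Seq2SeqTrainingArguments` roughly as follows (the output directory is assumed):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="xlmroberta-finetuned-Spanish",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=5,
    label_smoothing_factor=0.1,
)
```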
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
roymukund/xlm-roberta-base-finetuned-ner | roymukund | 2022-06-16T20:32:08Z | 5 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:hi_ner-original", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-06-16T09:30:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- hi_ner-original
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: hi_ner-original
type: hi_ner-original
args: HiNER
metrics:
- name: Precision
type: precision
value: 0.7366076627460114
- name: Recall
type: recall
value: 0.6770947627585838
- name: F1
type: f1
value: 0.7055985498152408
- name: Accuracy
type: accuracy
value: 0.9359390321752693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the hi_ner-original dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2314
- Precision: 0.7366
- Recall: 0.6771
- F1: 0.7056
- Accuracy: 0.9359
## Model description
More information needed
## Intended uses & limitations
More information needed
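Pending more detail, a minimal sketch of the obvious use, token classification via the standard pipeline (the aggregation strategy and example sentence are assumptions; HiNER targets Hindi text):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="roymukund/xlm-roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # assumed; merges sub-word tokens into entity spans
)
print(ner("मुंबई भारत का सबसे बड़ा शहर है।"))  # "Mumbai is the largest city of India."
```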
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2025 | 0.74 | 7000 | 0.2146 | 0.7399 | 0.6197 | 0.6745 | 0.9316 |
| 0.1641 | 1.47 | 14000 | 0.2238 | 0.7618 | 0.6108 | 0.6780 | 0.9336 |
| 0.1404 | 2.21 | 21000 | 0.2302 | 0.7560 | 0.6327 | 0.6889 | 0.9350 |
| 0.1371 | 2.95 | 28000 | 0.2226 | 0.7395 | 0.6600 | 0.6975 | 0.9350 |
| 0.1248 | 3.68 | 35000 | 0.2314 | 0.7366 | 0.6771 | 0.7056 | 0.9359 |
| 0.1112 | 4.42 | 42000 | 0.2423 | 0.7089 | 0.7064 | 0.7077 | 0.9333 |
| 0.1048 | 5.16 | 49000 | 0.2599 | 0.7326 | 0.6793 | 0.7050 | 0.9349 |
| 0.1091 | 5.89 | 56000 | 0.2542 | 0.7244 | 0.6918 | 0.7077 | 0.9348 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
eplatas/scibert_scivocab_uncased_finetuned_leukaemia | eplatas | 2022-06-16T20:01:22Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-06-16T19:41:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_uncased_finetuned_leukaemia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased_finetuned_leukaemia
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
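Given the `text-generation` tag, a hedged usage sketch (this assumes the fine-tuned BERT checkpoint was exported with a causal LM head; if it was not, the pipeline will refuse to load it):

```python
from transformers import pipeline

# Assumed usage based solely on the card's pipeline tag.
generator = pipeline("text-generation", model="eplatas/scibert_scivocab_uncased_finetuned_leukaemia")
print(generator("acute lymphoblastic leukaemia is", max_length=40)[0]["generated_text"])
```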
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.742 | 1.0 | 50 | 2.9184 |
| 0.7729 | 2.0 | 100 | 1.0324 |
| 0.697 | 3.0 | 150 | 0.5968 |
| 0.6573 | 4.0 | 200 | 0.4985 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
S2312dal/M3_MLM | S2312dal | 2022-06-16T19:46:04Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-06-16T19:22:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: M3_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M3_MLM
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8186
## Model description
More information needed
## Intended uses & limitations
More information needed
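As a sketch of the obvious use, masked-token prediction through the fill-mask pipeline (`[MASK]` is the BERT convention the SpanBERT tokenizer follows; the example sentence is made up):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="S2312dal/M3_MLM")
for pred in fill("The patient was treated with [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```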
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.6707 | 1.0 | 26 | 7.4412 |
| 6.9122 | 2.0 | 52 | 6.3385 |
| 6.2166 | 3.0 | 78 | 5.9148 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
S2312dal/M4_MLM | S2312dal | 2022-06-16T19:42:02Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-06-16T19:32:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: M4_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M4_MLM
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.7633 | 1.0 | 26 | 8.0400 |
| 7.8899 | 2.0 | 52 | 7.6923 |
| 7.589 | 3.0 | 78 | 7.4373 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/alanrmacleod-karl_was_right-yaboihakim | huggingtweets | 2022-06-16T19:29:02Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-16T19:28:53Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521992020977348609/RrM3MB-G_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1412117139071418386/3bmc9Vk7_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1067405915077468161/tRoXWi8G_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Michael Parenti’s Stache 🚩☭ & Alan MacLeod & Hakim</div>
<div style="text-align: center; font-size: 14px;">@alanrmacleod-karl_was_right-yaboihakim</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Michael Parenti’s Stache 🚩☭ & Alan MacLeod & Hakim.
| Data | Michael Parenti’s Stache 🚩☭ | Alan MacLeod | Hakim |
| --- | --- | --- | --- |
| Tweets downloaded | 3236 | 3244 | 2415 |
| Retweets | 283 | 480 | 709 |
| Short tweets | 360 | 177 | 139 |
| Tweets kept | 2593 | 2587 | 1567 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38bj8kvf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alanrmacleod-karl_was_right-yaboihakim's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1klcaw4v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1klcaw4v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/alanrmacleod-karl_was_right-yaboihakim')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:26:16Z | 37 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:26:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
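Since BPR/GPL checkpoints are trained for retrieval, a short semantic-search sketch with `sentence-transformers` (the corpus and query here are made up):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b")

corpus = ["BRCA1 mutations raise breast cancer risk.", "The mitochondrion produces ATP."]
query = "Which gene is associated with breast cancer?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Retrieve the best-matching passage by cosine similarity.
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(corpus[hits[0][0]["corpus_id"]])
```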
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 92924 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-climate-fever-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:25:16Z | 2 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:25:09Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 169268 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-dbpedia-entity-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:23:49Z | 5 | 1 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:23:42Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 144872 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-hotpotqa-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:19:52Z | 2 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:19:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 163541 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-nfcorpus-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:17:34Z | 4 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:17:25Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 338 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-nq-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:15:23Z | 4 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:15:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 245832 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-quora-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:14:29Z | 2 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:14:22Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 16341 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-trec-covid-base-msmarco-distilbert-tas-b | income | 2022-06-16T18:00:33Z | 14 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T18:00:26Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15001 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
income/bpr-gpl-trec-news-base-msmarco-distilbert-tas-b | income | 2022-06-16T17:59:18Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-06-16T17:59:11Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 55028 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/basilhalperin-ben_golub-tylercowen | huggingtweets | 2022-06-16T17:09:13Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-16T17:03:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/basilhalperin-ben_golub-tylercowen/1655399323629/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1483290763056320512/oILN7yPo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1043847779355897857/xyZk8v-m_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1284936824075550723/ix2eGZd7_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tylercowen & Basil Halperin & Ben Golub 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@basilhalperin-ben_golub-tylercowen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tylercowen & Basil Halperin & Ben Golub 🇺🇦.
| Data | tylercowen | Basil Halperin | Ben Golub 🇺🇦 |
| --- | --- | --- | --- |
| Tweets downloaded | 2642 | 1024 | 3247 |
| Retweets | 2065 | 80 | 1009 |
| Short tweets | 43 | 60 | 390 |
| Tweets kept | 534 | 884 | 1848 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4x0ck2xi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @basilhalperin-ben_golub-tylercowen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fuzqv36t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fuzqv36t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/basilhalperin-ben_golub-tylercowen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anantoj/T5-summarizer-simple-wiki-v2 | anantoj | 2022-06-16T16:44:54Z | 10 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-06-16T16:35:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: T5-summarizer-simple-wiki-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-summarizer-simple-wiki-v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0866
## Model description
More information needed
## Intended uses & limitations
More information needed
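Pending more detail, a minimal usage sketch (the input text is made up; T5 checkpoints expect a `summarize:` task prefix, which the summarization pipeline normally injects from the model config):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="anantoj/T5-summarizer-simple-wiki-v2")
print(summarizer("A long passage to be simplified and condensed...", max_length=60)[0]["summary_text"])
```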
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2575 | 1.0 | 14719 | 2.1173 |
| 2.2663 | 2.0 | 29438 | 2.0926 |
| 2.2092 | 3.0 | 44157 | 2.0866 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/unknownco123 | huggingtweets | 2022-06-16T16:20:12Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-16T16:18:10Z |
---
language: en
thumbnail: http://www.huggingtweets.com/unknownco123/1655396407192/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522164949904248832/IdAMZkO9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">UnknownCollector 🇺🇦🕊🙏🏼</div>
<div style="text-align: center; font-size: 14px;">@unknownco123</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from UnknownCollector 🇺🇦🕊🙏🏼.
| Data | UnknownCollector 🇺🇦🕊🙏🏼 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 1208 |
| Short tweets | 184 |
| Tweets kept | 1852 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gtnmsztt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @unknownco123's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2osaytek) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2osaytek/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/unknownco123')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
S2312dal/M1_MLM
|
S2312dal
| 2022-06-16T15:54:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-16T14:48:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: M1_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M1_MLM
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2887
## Model description
More information needed
## Intended uses & limitations
More information needed
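In lieu of card details, a minimal fill-mask sketch (not part of the original card):
```python
from transformers import pipeline

# Minimal usage sketch; ALBERT's mask token is [MASK].
unmasker = pipeline("fill-mask", model="S2312dal/M1_MLM")
print(unmasker("The capital of France is [MASK]."))
```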
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.2418 | 1.0 | 25 | 2.4870 |
| 2.4653 | 2.0 | 50 | 2.3762 |
| 2.2127 | 3.0 | 75 | 2.3000 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aleks0309/q-FrozenLake-v1-4x4-noSlippery
|
aleks0309
| 2022-06-16T14:37:01Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-16T14:36:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="aleks0309/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Corianas/PPO-QbertNoFrameskip-v4_2
|
Corianas
| 2022-06-16T14:26:25Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-16T14:22:03Z |
---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 12830.00 +/- 4355.31
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env QbertNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
Zengwei/pruned_transducer_stateless6_hubert_xtralarge_ll60k_finetune_ls960
|
Zengwei
| 2022-06-16T14:15:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-16T06:16:05Z |
Things worth mentioning:
1. The float-type teacher embeddings are quantized into a sequence of
8-bit integer codebook indexes (see the sketch below).
2. A middle layer, layer 36 (1-based) of the 48 total layers, is used to extract
teacher embeddings.
3. Layer 6 (1-based) of the student's 6 total layers is used to extract
student embeddings.
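As an illustrative sketch only (shapes and the single-codebook setup are made up; the project's actual quantizer may differ), codebook quantization maps each float embedding frame to the index of its nearest codebook vector, which fits in an 8-bit integer for a 256-entry codebook:
```python
import numpy as np

# Illustrative sketch with made-up shapes: map each teacher-embedding frame
# to the index of its nearest codebook vector; 256 entries fit in uint8.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 768))   # 256 codewords, 768-dim embeddings
frames = rng.normal(size=(100, 768))     # 100 teacher-embedding frames

# Nearest codeword by squared Euclidean distance.
dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indexes = dists.argmin(axis=1).astype(np.uint8)  # 8-bit codebook indexes
print(indexes.shape, indexes.dtype)      # (100,) uint8
```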
|
bekirbakar/wav2vec2-large-xls-r-300m-finnish
|
bekirbakar
| 2022-06-16T13:34:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T10:46:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-finnish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-finnish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4747
- Wer: 0.5143
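A minimal transcription sketch (not part of the original card; the audio file name is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

# Minimal usage sketch; "sample_finnish_16khz.wav" is a placeholder file name.
asr = pipeline("automatic-speech-recognition", model="bekirbakar/wav2vec2-large-xls-r-300m-finnish")
print(asr("sample_finnish_16khz.wav")["text"])
```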
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1666 | 14.8 | 400 | 0.4747 | 0.5143 |
| 0.0875 | 29.62 | 800 | 0.4747 | 0.5143 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
bekirbakar/wav2vec2-large-xls-r-300m-slovenian
|
bekirbakar
| 2022-06-16T13:33:50Z | 281 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T14:23:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-slovenian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-slovenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4462
- Wer: 0.3271
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3681 | 4.93 | 400 | 0.7067 | 0.6486 |
| 0.2311 | 9.87 | 800 | 0.5155 | 0.4341 |
| 0.0833 | 14.81 | 1200 | 0.4996 | 0.3799 |
| 0.0455 | 19.75 | 1600 | 0.4462 | 0.3271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ArthurZ/roberta-large-sharded
|
ArthurZ
| 2022-06-16T13:33:48Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-16T13:18:24Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: roberta-large-sharded
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roberta-large-sharded
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Abeljones/Ye
|
Abeljones
| 2022-06-16T12:07:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-16T12:07:26Z |
```bash
git lfs install
git clone https://huggingface.co/dalle-mini/dalle-mini
```
|
eunbeee/ainize-kobart-news-eb-finetuned-papers
|
eunbeee
| 2022-06-16T12:07:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-12T16:20:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ainize-kobart-news-eb-finetuned-papers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ainize-kobart-news-eb-finetuned-papers
This model is a fine-tuned version of [ainize/kobart-news](https://huggingface.co/ainize/kobart-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3066
- Rouge1: 14.5433
- Rouge2: 5.2238
- Rougel: 14.4731
- Rougelsum: 14.5183
- Gen Len: 19.9934
## Model description
More information needed
## Intended uses & limitations
More information needed
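Absent card details, a minimal summarization sketch (not part of the original card; the Korean input is a placeholder):
```python
from transformers import pipeline

# Minimal usage sketch; the Korean input is a placeholder sentence.
summarizer = pipeline("summarization", model="eunbeee/ainize-kobart-news-eb-finetuned-papers")
print(summarizer("요약할 논문 본문을 여기에 넣으세요.", max_length=64)[0]["summary_text"])
```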
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1918 | 1.0 | 7200 | 0.2403 | 14.6883 | 5.2427 | 14.6306 | 14.6489 | 19.9938 |
| 0.1332 | 2.0 | 14400 | 0.2391 | 14.5165 | 5.2443 | 14.493 | 14.4908 | 19.9972 |
| 0.0966 | 3.0 | 21600 | 0.2539 | 14.758 | 5.4976 | 14.6906 | 14.7188 | 19.9941 |
| 0.0736 | 4.0 | 28800 | 0.2782 | 14.6267 | 5.3371 | 14.5578 | 14.6014 | 19.9934 |
| 0.0547 | 5.0 | 36000 | 0.3066 | 14.5433 | 5.2238 | 14.4731 | 14.5183 | 19.9934 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
waboucay/camembert-large-finetuned-rua_wl
|
waboucay
| 2022-06-16T12:02:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-16T11:58:20Z |
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 74.8 | 74.5 |
| test | 74.8 | 74.6 |
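A minimal scoring sketch (not part of the original card; the French premise/hypothesis pair is an illustrative assumption):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal usage sketch: score a French premise/hypothesis pair with the
# fine-tuned classification head.
name = "waboucay/camembert-large-finetuned-rua_wl"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Le chat dort sur le canapé.", "Un animal se repose.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```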
|
rajeshradhakrishnan/ml-news-classify-fastai
|
rajeshradhakrishnan
| 2022-06-16T11:57:58Z | 0 | 2 |
fastai
|
[
"fastai",
"arxiv:2005.00085",
"region:us"
] | null | 2022-06-15T10:53:22Z |
---
tags:
- fastai
---
# Malayalam (മലയാളം) Classifier using fastai (Work in Progress)
🥳 This model is my attempt at using machine learning with the Malayalam language. Huge inspiration from [Malayalam Text Classifier](https://kurianbenoy.com/2022-05-30-malayalamtext-0/). Courtesy to @waydegilliam for [blurr](https://ohmeow.github.io/blurr/text-examples-multilabel.html)
🌈 To learn machine learning in Malayalam, and to get acquainted with it, to be continued...
# How is it built? & How to use it?
Please find the [notebook](https://nbviewer.org/github/rajeshradhakrishnanmvk/kitchen2.0/blob/feature101-frontend/ml/fastai_X_Hugging_Face_Group_2022.ipynb) used for training the model
Usage:
First, install the utilities to load the model as well as `blurr`, which was used to train this model.
```bash
pip install "huggingface_hub[fastai]"
git clone https://github.com/ohmeow/blurr.git && cd blurr && pip install -e ".[dev]"
```
```python
from huggingface_hub import from_pretrained_fastai
learner = from_pretrained_fastai("rajeshradhakrishnan/ml-news-classify-fastai")
sentences = ["ഓഹരി വിപണി തകരുമ്പോള് നിക്ഷേപം എങ്ങനെ സുരക്ഷിതമാക്കാം",
"വാര്ണറുടെ ഒറ്റക്കയ്യന് ക്യാച്ചില് അമ്പരന്ന് ക്രിക്കറ്റ് ലോകം"]
probs = learner.predict(sentences)
# 'business', 'entertainment', 'sports', 'technology'
for idx in range(len(sentences)):
print(f"Probability that sentence '{sentences[idx]}' is business is: {100*probs[idx]['probs'][0]:.2f}%")
print(f"Probability that sentence '{sentences[idx]}' is entertainment is: {100*probs[idx]['probs'][1]:.2f}%")
print(f"Probability that sentence '{sentences[idx]}' is sports is: {100*probs[idx]['probs'][2]:.2f}%")
print(f"Probability that sentence '{sentences[idx]}' is technology is: {100*probs[idx]['probs'][3]:.2f}%")
```
---
# Model card
## Model description
This is a Malayalam classifier model for the labels 'business', 'entertainment', 'sports', and 'technology'.
## Intended uses & limitations
The model can be used to categorize Malayalam news feeds.
## Training and evaluation data
Data is from the [AI4Bharat-IndicNLP Dataset](https://github.com/AI4Bharat/indicnlp_corpus#indicnlp-news-article-classification-dataset), with a wrapper to extract only the Malayalam data ([HF dataset](https://huggingface.co/datasets/rajeshradhakrishnan/malayalam_news)).
## Citation
```
@article{kunchukuttan2020indicnlpcorpus,
title={AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages},
author={Anoop Kunchukuttan and Divyanshu Kakwani and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
year={2020},
journal={arXiv preprint arXiv:2005.00085},
}
```
|
eleldar/repunct-model_ft
|
eleldar
| 2022-06-16T11:16:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-16T09:38:08Z |
Model for API: https://github.com/eleldar/Punctuation
|
eleldar/rubert-base-cased-sentence
|
eleldar
| 2022-06-16T11:16:20Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-16T10:30:20Z |
Model for API: https://github.com/eleldar/Punctuation
|
anantoj/T5-summarizer-simple-wiki
|
anantoj
| 2022-06-16T10:47:42Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-16T10:35:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2583 | 1.0 | 14719 | 2.1164 |
| 2.2649 | 2.0 | 29438 | 2.0925 |
| 2.209 | 3.0 | 44157 | 2.0868 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ahmeddbahaa/xlmroberta2xlmroberta-finetune-summarization-ur
|
ahmeddbahaa
| 2022-06-16T10:27:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarization",
"ur",
"xlm-roberta",
"Abstractive Summarization",
"roberta",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-15T16:34:48Z |
---
tags:
- summarization
- ur
- encoder-decoder
- xlm-roberta
- Abstractive Summarization
- roberta
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: xlmroberta2xlmroberta-finetune-summarization-ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta2xlmroberta-finetune-summarization-ur
This model is a fine-tuned version of [](https://huggingface.co/) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4576
- Rouge-1: 26.51
- Rouge-2: 9.4
- Rouge-l: 23.21
- Gen Len: 19.99
- Bertscore: 68.15
## Model description
More information needed
## Intended uses & limitations
More information needed
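Pending those details, a minimal usage sketch (not part of the original card; the Urdu input is a placeholder):
```python
from transformers import pipeline

# Minimal usage sketch; the Urdu input is a placeholder sentence.
summarizer = pipeline("summarization",
                      model="ahmeddbahaa/xlmroberta2xlmroberta-finetune-summarization-ur")
print(summarizer("یہاں خلاصہ کرنے کے لیے اردو متن رکھیں۔", max_length=64)[0]["summary_text"])
```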
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Corianas/q-FrozenLake-v1-4x4-Slippery
|
Corianas
| 2022-06-16T10:11:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-16T09:14:52Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Corianas/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
waboucay/camembert-large-finetuned-repnum_wl
|
waboucay
| 2022-06-16T09:46:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-16T09:37:43Z |
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 80.4 | 80.4 |
| test | 80.6 | 80.6 |
|
huggingtweets/minusgn
|
huggingtweets
| 2022-06-16T09:01:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-16T09:00:54Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1081285419512127488/Mkb9FgN3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Isak Vik</div>
<div style="text-align: center; font-size: 14px;">@minusgn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Isak Vik.
| Data | Isak Vik |
| --- | --- |
| Tweets downloaded | 3222 |
| Retweets | 190 |
| Short tweets | 550 |
| Tweets kept | 2482 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dy32g00/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @minusgn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3njlvz02) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3njlvz02/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/minusgn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
waboucay/camembert-base-finetuned-repnum_wl-rua_wl_3_classes
|
waboucay
| 2022-06-16T07:44:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-16T07:27:43Z |
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 75.6 | 75.3 |
| test | 76.1 | 75.8 |
|
waboucay/camembert-base-finetuned-rua_wl_3_classes
|
waboucay
| 2022-06-16T07:39:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-16T07:29:41Z |
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 73.5 | 73.3 |
| test | 73.8 | 73.6 |
|
iaanimashaun/opus-mt-en-sw-finetuned-en-to-sw
|
iaanimashaun
| 2022-06-16T06:40:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-13T06:44:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-sw-finetuned-en-to-sw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-sw-finetuned-en-to-sw
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
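Absent card details, a minimal translation sketch (not part of the original card):
```python
from transformers import pipeline

# Minimal usage sketch: English-to-Swahili translation with the fine-tuned
# Marian checkpoint.
translator = pipeline("translation", model="iaanimashaun/opus-mt-en-sw-finetuned-en-to-sw")
print(translator("How are you today?")[0]["translation_text"])
```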
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 113 | 0.9884 | 50.2226 | 19.0434 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Corianas/SkiingNoFrameskip-v4_ScoringTest
|
Corianas
| 2022-06-16T06:22:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SkiingNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-16T06:20:38Z |
---
library_name: stable-baselines3
tags:
- SkiingNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -30000.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SkiingNoFrameskip-v4
type: SkiingNoFrameskip-v4
---
# **PPO** Agent playing **SkiingNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SkiingNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env SkiingNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env SkiingNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env SkiingNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env SkiingNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
ouiame/T5_mlsum
|
ouiame
| 2022-06-16T05:31:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"fr",
"dataset:ouiame/autotrain-data-trainproject",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T13:51:07Z |
---
tags: autotrain
language: fr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-trainproject
co2_eq_emissions: 976.8219757938544
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 985232789
- CO2 Emissions (in grams): 976.8219757938544
## Validation Metrics
- Loss: 1.7047555446624756
- Rouge1: 20.2108
- Rouge2: 7.8633
- RougeL: 16.9554
- RougeLsum: 17.3178
- Gen Len: 18.9874
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ouiame/autotrain-trainproject-985232789
```
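A Python equivalent of the cURL call, sketched with the `requests` library (the bearer token is a placeholder):
```python
import requests

# Python sketch of the Inference API call above; replace the token placeholder.
API_URL = "https://api-inference.huggingface.co/models/ouiame/autotrain-trainproject-985232789"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```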
|
eslamxm/mbert2mbert-finetune-fa
|
eslamxm
| 2022-06-16T05:28:50Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarization",
"fa",
"mbert",
"mbert2mbert",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:pn_summary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-15T22:17:31Z |
---
tags:
- summarization
- fa
- mbert
- mbert2mbert
- Abstractive Summarization
- generated_from_trainer
datasets:
- pn_summary
model-index:
- name: mbert2mbert-finetune-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert2mbert-finetune-fa
This model is a fine-tuned version of [](https://huggingface.co/) on the pn_summary dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Corianas/dqn-BeamRiderNoFrameskip-v4_2
|
Corianas
| 2022-06-16T04:42:20Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-16T04:38:44Z |
---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 4574.80 +/- 2171.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
---
# **DQN** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sasuke/bert-base-uncased-finetuned-sst2
|
sasuke
| 2022-06-16T03:58:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-13T03:38:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9323394495412844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
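Absent card details, a minimal sentiment-scoring sketch (not part of the original card):
```python
from transformers import pipeline

# Minimal usage sketch: binary SST-2 sentiment scoring.
classifier = pipeline("text-classification", model="sasuke/bert-base-uncased-finetuned-sst2")
print(classifier("A thoroughly enjoyable film."))
```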
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1817 | 1.0 | 4210 | 0.2920 | 0.9186 |
| 0.1297 | 2.0 | 8420 | 0.3069 | 0.9209 |
| 0.0978 | 3.0 | 12630 | 0.2982 | 0.9323 |
| 0.062 | 4.0 | 16840 | 0.3278 | 0.9312 |
| 0.0303 | 5.0 | 21050 | 0.3642 | 0.9323 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
twieland/MIX2_ja-en_helsinki
|
twieland
| 2022-06-16T01:03:42Z | 107 | 2 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-12T01:01:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX2_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX2_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4929
- Otaku Benchmark VN (visual novel) BLEU: 20.21
- Otaku Benchmark LN (light novel) BLEU: 13.29
- Otaku Benchmark MANGA BLEU: 19.07
## Model description
More information needed
## Intended uses & limitations
More information needed
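Pending those details, a minimal translation sketch (not part of the original card):
```python
from transformers import pipeline

# Minimal usage sketch: Japanese-to-English translation with the fine-tuned
# Marian checkpoint.
translator = pipeline("translation", model="twieland/MIX2_ja-en_helsinki")
print(translator("猫がソファで寝ています。")[0]["translation_text"])
```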
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8467 | 0.01 | 2000 | 2.3237 |
| 2.6439 | 0.02 | 4000 | 2.2542 |
| 2.547 | 0.03 | 6000 | 2.1956 |
| 2.4852 | 0.04 | 8000 | 2.1088 |
| 2.4408 | 0.05 | 10000 | 2.0909 |
| 2.404 | 0.06 | 12000 | 2.1029 |
| 2.3634 | 0.07 | 14000 | 2.0636 |
| 2.3491 | 0.08 | 16000 | 2.0312 |
| 2.3203 | 0.09 | 18000 | 2.0187 |
| 2.3002 | 0.1 | 20000 | 1.9999 |
| 2.2791 | 0.11 | 22000 | 1.9823 |
| 2.2607 | 0.11 | 24000 | 1.9588 |
| 2.2475 | 0.12 | 26000 | 1.9728 |
| 2.2308 | 0.13 | 28000 | 1.9330 |
| 2.2237 | 0.14 | 30000 | 1.9657 |
| 2.208 | 0.15 | 32000 | 1.9560 |
| 2.2019 | 0.16 | 34000 | 1.9704 |
| 2.1864 | 0.17 | 36000 | 1.9513 |
| 2.1764 | 0.18 | 38000 | 1.9534 |
| 2.163 | 0.19 | 40000 | 1.9140 |
| 2.1534 | 0.2 | 42000 | 1.9241 |
| 2.146 | 0.21 | 44000 | 1.9162 |
| 2.1403 | 0.22 | 46000 | 1.9030 |
| 2.1309 | 0.23 | 48000 | 1.8741 |
| 2.1174 | 0.24 | 50000 | 1.8834 |
| 2.1157 | 0.25 | 52000 | 1.8666 |
| 2.1116 | 0.26 | 54000 | 1.8870 |
| 2.1062 | 0.27 | 56000 | 1.8837 |
| 2.0994 | 0.28 | 58000 | 1.8638 |
| 2.0924 | 0.29 | 60000 | 1.8766 |
| 2.0874 | 0.3 | 62000 | 1.8712 |
| 2.0805 | 0.31 | 64000 | 1.8792 |
| 2.0746 | 0.32 | 66000 | 1.8586 |
| 2.0684 | 0.32 | 68000 | 1.8819 |
| 2.0678 | 0.33 | 70000 | 1.8529 |
| 2.061 | 0.34 | 72000 | 1.8219 |
| 2.0532 | 0.35 | 74000 | 1.8383 |
| 2.0536 | 0.36 | 76000 | 1.8273 |
| 2.0432 | 0.37 | 78000 | 1.8304 |
| 2.0386 | 0.38 | 80000 | 1.8208 |
| 2.0361 | 0.39 | 82000 | 1.8103 |
| 2.0353 | 0.4 | 84000 | 1.8193 |
| 2.0266 | 0.41 | 86000 | 1.8369 |
| 2.0277 | 0.42 | 88000 | 1.8266 |
| 2.0221 | 0.43 | 90000 | 1.8372 |
| 2.0181 | 0.44 | 92000 | 1.8436 |
| 2.0182 | 0.45 | 94000 | 1.8505 |
| 2.0088 | 0.46 | 96000 | 1.8127 |
| 2.005 | 0.47 | 98000 | 1.8325 |
| 2.0003 | 0.48 | 100000 | 1.8407 |
| 2.0031 | 0.49 | 102000 | 1.8140 |
| 1.9954 | 0.5 | 104000 | 1.8177 |
| 1.9894 | 0.51 | 106000 | 1.8072 |
| 1.9901 | 0.52 | 108000 | 1.7971 |
| 1.9864 | 0.53 | 110000 | 1.8007 |
| 1.9848 | 0.53 | 112000 | 1.7961 |
| 1.9774 | 0.54 | 114000 | 1.7933 |
| 1.9802 | 0.55 | 116000 | 1.8031 |
| 1.9698 | 0.56 | 118000 | 1.8137 |
| 1.973 | 0.57 | 120000 | 1.7930 |
| 1.9696 | 0.58 | 122000 | 1.7838 |
| 1.9641 | 0.59 | 124000 | 1.7730 |
| 1.9609 | 0.6 | 126000 | 1.7800 |
| 1.9605 | 0.61 | 128000 | 1.7680 |
| 1.9516 | 0.62 | 130000 | 1.7895 |
| 1.9529 | 0.63 | 132000 | 1.7825 |
| 1.9503 | 0.64 | 134000 | 1.7792 |
| 1.9528 | 0.65 | 136000 | 1.8031 |
| 1.9439 | 0.66 | 138000 | 1.7652 |
| 1.9453 | 0.67 | 140000 | 1.7713 |
| 1.9404 | 0.68 | 142000 | 1.7585 |
| 1.9399 | 0.69 | 144000 | 1.7454 |
| 1.9325 | 0.7 | 146000 | 1.7605 |
| 1.9327 | 0.71 | 148000 | 1.7608 |
| 1.9301 | 0.72 | 150000 | 1.7743 |
| 1.928 | 0.73 | 152000 | 1.7532 |
| 1.9286 | 0.74 | 154000 | 1.7682 |
| 1.9194 | 0.74 | 156000 | 1.7582 |
| 1.9247 | 0.75 | 158000 | 1.7601 |
| 1.9183 | 0.76 | 160000 | 1.7600 |
| 1.9138 | 0.77 | 162000 | 1.7555 |
| 1.9148 | 0.78 | 164000 | 1.7447 |
| 1.913 | 0.79 | 166000 | 1.7512 |
| 1.9084 | 0.8 | 168000 | 1.7408 |
| 1.9109 | 0.81 | 170000 | 1.7463 |
| 1.905 | 0.82 | 172000 | 1.7543 |
| 1.9067 | 0.83 | 174000 | 1.7662 |
| 1.9005 | 0.84 | 176000 | 1.7428 |
| 1.8997 | 0.85 | 178000 | 1.7500 |
| 1.8963 | 0.86 | 180000 | 1.7297 |
| 1.8938 | 0.87 | 182000 | 1.7356 |
| 1.8923 | 0.88 | 184000 | 1.7602 |
| 1.8896 | 0.89 | 186000 | 1.7426 |
| 1.8866 | 0.9 | 188000 | 1.7323 |
| 1.887 | 0.91 | 190000 | 1.7587 |
| 1.8855 | 0.92 | 192000 | 1.7591 |
| 1.8842 | 0.93 | 194000 | 1.7570 |
| 1.8808 | 0.94 | 196000 | 1.7311 |
| 1.8836 | 0.95 | 198000 | 1.7449 |
| 1.8761 | 0.96 | 200000 | 1.7534 |
| 1.8721 | 0.96 | 202000 | 1.7623 |
| 1.8765 | 0.97 | 204000 | 1.7462 |
| 1.8747 | 0.98 | 206000 | 1.7452 |
| 1.8667 | 0.99 | 208000 | 1.7303 |
| 1.8618 | 1.0 | 210000 | 1.7468 |
| 1.8475 | 1.01 | 212000 | 1.7443 |
| 1.8435 | 1.02 | 214000 | 1.7622 |
| 1.8452 | 1.03 | 216000 | 1.7153 |
| 1.84 | 1.04 | 218000 | 1.6976 |
| 1.8432 | 1.05 | 220000 | 1.7013 |
| 1.842 | 1.06 | 222000 | 1.7073 |
| 1.8428 | 1.07 | 224000 | 1.6991 |
| 1.841 | 1.08 | 226000 | 1.7477 |
| 1.8321 | 1.09 | 228000 | 1.7438 |
| 1.838 | 1.1 | 230000 | 1.7352 |
| 1.8339 | 1.11 | 232000 | 1.7242 |
| 1.836 | 1.12 | 234000 | 1.7221 |
| 1.8329 | 1.13 | 236000 | 1.7402 |
| 1.8337 | 1.14 | 238000 | 1.7083 |
| 1.8267 | 1.15 | 240000 | 1.7200 |
| 1.8335 | 1.16 | 242000 | 1.7092 |
| 1.8306 | 1.17 | 244000 | 1.7340 |
| 1.8279 | 1.17 | 246000 | 1.6983 |
| 1.8261 | 1.18 | 248000 | 1.6928 |
| 1.8295 | 1.19 | 250000 | 1.7135 |
| 1.8227 | 1.2 | 252000 | 1.7156 |
| 1.822 | 1.21 | 254000 | 1.7018 |
| 1.8216 | 1.22 | 256000 | 1.7157 |
| 1.8205 | 1.23 | 258000 | 1.7047 |
| 1.8163 | 1.24 | 260000 | 1.6988 |
| 1.8187 | 1.25 | 262000 | 1.7077 |
| 1.8188 | 1.26 | 264000 | 1.6859 |
| 1.8138 | 1.27 | 266000 | 1.6831 |
| 1.8173 | 1.28 | 268000 | 1.6887 |
| 1.813 | 1.29 | 270000 | 1.6967 |
| 1.8114 | 1.3 | 272000 | 1.7085 |
| 1.8057 | 1.31 | 274000 | 1.6885 |
| 1.8094 | 1.32 | 276000 | 1.7198 |
| 1.8079 | 1.33 | 278000 | 1.7036 |
| 1.8056 | 1.34 | 280000 | 1.7106 |
| 1.8044 | 1.35 | 282000 | 1.6704 |
| 1.8047 | 1.36 | 284000 | 1.6811 |
| 1.7978 | 1.37 | 286000 | 1.6848 |
| 1.7997 | 1.38 | 288000 | 1.6698 |
| 1.7997 | 1.38 | 290000 | 1.6820 |
| 1.7945 | 1.39 | 292000 | 1.6963 |
| 1.7958 | 1.4 | 294000 | 1.6922 |
| 1.7923 | 1.41 | 296000 | 1.6577 |
| 1.7975 | 1.42 | 298000 | 1.6621 |
| 1.7914 | 1.43 | 300000 | 1.6804 |
| 1.7944 | 1.44 | 302000 | 1.6953 |
| 1.7927 | 1.45 | 304000 | 1.6846 |
| 1.789 | 1.46 | 306000 | 1.6889 |
| 1.7851 | 1.47 | 308000 | 1.6652 |
| 1.7902 | 1.48 | 310000 | 1.6823 |
| 1.7873 | 1.49 | 312000 | 1.6603 |
| 1.7868 | 1.5 | 314000 | 1.6766 |
| 1.7856 | 1.51 | 316000 | 1.6717 |
| 1.7807 | 1.52 | 318000 | 1.6466 |
| 1.7767 | 1.53 | 320000 | 1.6639 |
| 1.7782 | 1.54 | 322000 | 1.6678 |
| 1.7762 | 1.55 | 324000 | 1.6853 |
| 1.7746 | 1.56 | 326000 | 1.6785 |
| 1.7746 | 1.57 | 328000 | 1.6777 |
| 1.7716 | 1.58 | 330000 | 1.6784 |
| 1.7699 | 1.59 | 332000 | 1.6648 |
| 1.7739 | 1.59 | 334000 | 1.6725 |
| 1.7703 | 1.6 | 336000 | 1.6915 |
| 1.7707 | 1.61 | 338000 | 1.6858 |
| 1.7619 | 1.62 | 340000 | 1.6624 |
| 1.7652 | 1.63 | 342000 | 1.6797 |
| 1.7626 | 1.64 | 344000 | 1.6728 |
| 1.7647 | 1.65 | 346000 | 1.6580 |
| 1.7616 | 1.66 | 348000 | 1.6679 |
| 1.7616 | 1.67 | 350000 | 1.6470 |
| 1.7611 | 1.68 | 352000 | 1.6489 |
| 1.759 | 1.69 | 354000 | 1.6603 |
| 1.7604 | 1.7 | 356000 | 1.6532 |
| 1.7599 | 1.71 | 358000 | 1.6477 |
| 1.7529 | 1.72 | 360000 | 1.6322 |
| 1.7596 | 1.73 | 362000 | 1.6447 |
| 1.7508 | 1.74 | 364000 | 1.6509 |
| 1.7533 | 1.75 | 366000 | 1.6465 |
| 1.755 | 1.76 | 368000 | 1.6485 |
| 1.7473 | 1.77 | 370000 | 1.6493 |
| 1.7435 | 1.78 | 372000 | 1.6542 |
| 1.7483 | 1.79 | 374000 | 1.6573 |
| 1.7475 | 1.8 | 376000 | 1.6626 |
| 1.7439 | 1.8 | 378000 | 1.6366 |
| 1.7417 | 1.81 | 380000 | 1.6312 |
| 1.7387 | 1.82 | 382000 | 1.6424 |
| 1.7415 | 1.83 | 384000 | 1.6468 |
| 1.7409 | 1.84 | 386000 | 1.6528 |
| 1.7362 | 1.85 | 388000 | 1.6394 |
| 1.7372 | 1.86 | 390000 | 1.6581 |
| 1.7347 | 1.87 | 392000 | 1.6546 |
| 1.7368 | 1.88 | 394000 | 1.6468 |
| 1.7302 | 1.89 | 396000 | 1.6450 |
| 1.7317 | 1.9 | 398000 | 1.6368 |
| 1.7306 | 1.91 | 400000 | 1.6399 |
| 1.7304 | 1.92 | 402000 | 1.6180 |
| 1.726 | 1.93 | 404000 | 1.6212 |
| 1.7271 | 1.94 | 406000 | 1.6302 |
| 1.7312 | 1.95 | 408000 | 1.6264 |
| 1.7249 | 1.96 | 410000 | 1.6584 |
| 1.7226 | 1.97 | 412000 | 1.6514 |
| 1.7214 | 1.98 | 414000 | 1.6516 |
| 1.7228 | 1.99 | 416000 | 1.6346 |
| 1.7205 | 2.0 | 418000 | 1.6370 |
| 1.7041 | 2.01 | 420000 | 1.6021 |
| 1.691 | 2.02 | 422000 | 1.6385 |
| 1.6896 | 2.02 | 424000 | 1.6280 |
| 1.6882 | 2.03 | 426000 | 1.6295 |
| 1.6889 | 2.04 | 428000 | 1.6445 |
| 1.6904 | 2.05 | 430000 | 1.6558 |
| 1.6933 | 2.06 | 432000 | 1.6164 |
| 1.6916 | 2.07 | 434000 | 1.6011 |
| 1.6873 | 2.08 | 436000 | 1.6199 |
| 1.6903 | 2.09 | 438000 | 1.6300 |
| 1.6859 | 2.1 | 440000 | 1.6104 |
| 1.6901 | 2.11 | 442000 | 1.6248 |
| 1.6884 | 2.12 | 444000 | 1.6251 |
| 1.6859 | 2.13 | 446000 | 1.6145 |
| 1.6906 | 2.14 | 448000 | 1.6181 |
| 1.6859 | 2.15 | 450000 | 1.6264 |
| 1.6814 | 2.16 | 452000 | 1.6069 |
| 1.6853 | 2.17 | 454000 | 1.6089 |
| 1.6881 | 2.18 | 456000 | 1.6102 |
| 1.6869 | 2.19 | 458000 | 1.6327 |
| 1.6827 | 2.2 | 460000 | 1.6069 |
| 1.6813 | 2.21 | 462000 | 1.6278 |
| 1.6806 | 2.22 | 464000 | 1.6176 |
| 1.6763 | 2.23 | 466000 | 1.6180 |
| 1.68 | 2.23 | 468000 | 1.6226 |
| 1.6816 | 2.24 | 470000 | 1.6071 |
| 1.6845 | 2.25 | 472000 | 1.6178 |
| 1.6764 | 2.26 | 474000 | 1.6073 |
| 1.682 | 2.27 | 476000 | 1.5966 |
| 1.6727 | 2.28 | 478000 | 1.5979 |
| 1.6718 | 2.29 | 480000 | 1.6109 |
| 1.6764 | 2.3 | 482000 | 1.6034 |
| 1.671 | 2.31 | 484000 | 1.6001 |
| 1.6691 | 2.32 | 486000 | 1.6148 |
| 1.6706 | 2.33 | 488000 | 1.6003 |
| 1.6705 | 2.34 | 490000 | 1.6021 |
| 1.6699 | 2.35 | 492000 | 1.5940 |
| 1.6708 | 2.36 | 494000 | 1.6077 |
| 1.6715 | 2.37 | 496000 | 1.6188 |
| 1.6672 | 2.38 | 498000 | 1.5903 |
| 1.6638 | 2.39 | 500000 | 1.6042 |
| 1.6634 | 2.4 | 502000 | 1.5967 |
| 1.6669 | 2.41 | 504000 | 1.5904 |
| 1.6643 | 2.42 | 506000 | 1.6071 |
| 1.6606 | 2.43 | 508000 | 1.6065 |
| 1.6573 | 2.44 | 510000 | 1.6010 |
| 1.6603 | 2.44 | 512000 | 1.5801 |
| 1.6568 | 2.45 | 514000 | 1.5961 |
| 1.6564 | 2.46 | 516000 | 1.6020 |
| 1.6596 | 2.47 | 518000 | 1.5952 |
| 1.6567 | 2.48 | 520000 | 1.5760 |
| 1.6536 | 2.49 | 522000 | 1.5697 |
| 1.6564 | 2.5 | 524000 | 1.5664 |
| 1.652 | 2.51 | 526000 | 1.5616 |
| 1.653 | 2.52 | 528000 | 1.5738 |
| 1.6525 | 2.53 | 530000 | 1.5754 |
| 1.65 | 2.54 | 532000 | 1.5749 |
| 1.6519 | 2.55 | 534000 | 1.5788 |
| 1.6515 | 2.56 | 536000 | 1.5953 |
| 1.6492 | 2.57 | 538000 | 1.5836 |
| 1.6473 | 2.58 | 540000 | 1.5896 |
| 1.6452 | 2.59 | 542000 | 1.5858 |
| 1.6464 | 2.6 | 544000 | 1.5760 |
| 1.6445 | 2.61 | 546000 | 1.5683 |
| 1.6457 | 2.62 | 548000 | 1.5823 |
| 1.6417 | 2.63 | 550000 | 1.5780 |
| 1.6407 | 2.64 | 552000 | 1.5715 |
| 1.6368 | 2.65 | 554000 | 1.5618 |
| 1.6357 | 2.65 | 556000 | 1.5725 |
| 1.6446 | 2.66 | 558000 | 1.5744 |
| 1.634 | 2.67 | 560000 | 1.5360 |
| 1.6351 | 2.68 | 562000 | 1.5599 |
| 1.6362 | 2.69 | 564000 | 1.5607 |
| 1.637 | 2.7 | 566000 | 1.5561 |
| 1.6324 | 2.71 | 568000 | 1.5591 |
| 1.6325 | 2.72 | 570000 | 1.5527 |
| 1.6323 | 2.73 | 572000 | 1.5537 |
| 1.629 | 2.74 | 574000 | 1.5673 |
| 1.627 | 2.75 | 576000 | 1.5509 |
| 1.6279 | 2.76 | 578000 | 1.5507 |
| 1.6291 | 2.77 | 580000 | 1.5304 |
| 1.625 | 2.78 | 582000 | 1.5540 |
| 1.6246 | 2.79 | 584000 | 1.5530 |
| 1.6228 | 2.8 | 586000 | 1.5570 |
| 1.6241 | 2.81 | 588000 | 1.5586 |
| 1.6224 | 2.82 | 590000 | 1.5480 |
| 1.6264 | 2.83 | 592000 | 1.5624 |
| 1.6214 | 2.84 | 594000 | 1.5565 |
| 1.6187 | 2.85 | 596000 | 1.5397 |
| 1.6191 | 2.86 | 598000 | 1.5520 |
| 1.6192 | 2.87 | 600000 | 1.5494 |
| 1.6182 | 2.87 | 602000 | 1.5608 |
| 1.6164 | 2.88 | 604000 | 1.5428 |
| 1.6107 | 2.89 | 606000 | 1.5525 |
| 1.614 | 2.9 | 608000 | 1.5277 |
| 1.6158 | 2.91 | 610000 | 1.5502 |
| 1.6082 | 2.92 | 612000 | 1.5452 |
| 1.6089 | 2.93 | 614000 | 1.5400 |
| 1.6112 | 2.94 | 616000 | 1.5322 |
| 1.6069 | 2.95 | 618000 | 1.5394 |
| 1.6111 | 2.96 | 620000 | 1.5537 |
| 1.6038 | 2.97 | 622000 | 1.5486 |
| 1.6073 | 2.98 | 624000 | 1.5551 |
| 1.6046 | 2.99 | 626000 | 1.5386 |
| 1.6051 | 3.0 | 628000 | 1.5369 |
| 1.5672 | 3.01 | 630000 | 1.5361 |
| 1.5694 | 3.02 | 632000 | 1.5390 |
| 1.5692 | 3.03 | 634000 | 1.5386 |
| 1.5651 | 3.04 | 636000 | 1.5456 |
| 1.5724 | 3.05 | 638000 | 1.5419 |
| 1.5708 | 3.06 | 640000 | 1.5363 |
| 1.5665 | 3.07 | 642000 | 1.5446 |
| 1.5706 | 3.08 | 644000 | 1.5331 |
| 1.5679 | 3.08 | 646000 | 1.5449 |
| 1.5678 | 3.09 | 648000 | 1.5436 |
| 1.5676 | 3.1 | 650000 | 1.5309 |
| 1.5657 | 3.11 | 652000 | 1.5334 |
| 1.5697 | 3.12 | 654000 | 1.5303 |
| 1.5617 | 3.13 | 656000 | 1.5380 |
| 1.5675 | 3.14 | 658000 | 1.5404 |
| 1.5612 | 3.15 | 660000 | 1.5258 |
| 1.5639 | 3.16 | 662000 | 1.5329 |
| 1.567 | 3.17 | 664000 | 1.5418 |
| 1.5619 | 3.18 | 666000 | 1.5314 |
| 1.5637 | 3.19 | 668000 | 1.5201 |
| 1.5608 | 3.2 | 670000 | 1.5181 |
| 1.5641 | 3.21 | 672000 | 1.5290 |
| 1.5626 | 3.22 | 674000 | 1.5180 |
| 1.5605 | 3.23 | 676000 | 1.5156 |
| 1.5566 | 3.24 | 678000 | 1.5266 |
| 1.5587 | 3.25 | 680000 | 1.5286 |
| 1.5602 | 3.26 | 682000 | 1.5265 |
| 1.5535 | 3.27 | 684000 | 1.5354 |
| 1.5589 | 3.28 | 686000 | 1.5265 |
| 1.5569 | 3.29 | 688000 | 1.5346 |
| 1.559 | 3.29 | 690000 | 1.5306 |
| 1.5507 | 3.3 | 692000 | 1.5359 |
| 1.5547 | 3.31 | 694000 | 1.5264 |
| 1.5498 | 3.32 | 696000 | 1.5264 |
| 1.5559 | 3.33 | 698000 | 1.5273 |
| 1.553 | 3.34 | 700000 | 1.5137 |
| 1.5503 | 3.35 | 702000 | 1.5143 |
| 1.5498 | 3.36 | 704000 | 1.5263 |
| 1.5516 | 3.37 | 706000 | 1.5096 |
| 1.5461 | 3.38 | 708000 | 1.5112 |
| 1.5489 | 3.39 | 710000 | 1.5094 |
| 1.5451 | 3.4 | 712000 | 1.5079 |
| 1.544 | 3.41 | 714000 | 1.5058 |
| 1.5446 | 3.42 | 716000 | 1.5005 |
| 1.5417 | 3.43 | 718000 | 1.4972 |
| 1.5469 | 3.44 | 720000 | 1.5043 |
| 1.5407 | 3.45 | 722000 | 1.5041 |
| 1.5484 | 3.46 | 724000 | 1.5104 |
| 1.5409 | 3.47 | 726000 | 1.5087 |
| 1.5431 | 3.48 | 728000 | 1.5114 |
| 1.5393 | 3.49 | 730000 | 1.5102 |
| 1.5364 | 3.5 | 732000 | 1.5143 |
| 1.5403 | 3.5 | 734000 | 1.5202 |
| 1.5386 | 3.51 | 736000 | 1.5143 |
| 1.5381 | 3.52 | 738000 | 1.5198 |
| 1.5341 | 3.53 | 740000 | 1.5136 |
| 1.5344 | 3.54 | 742000 | 1.5172 |
| 1.5347 | 3.55 | 744000 | 1.5149 |
| 1.5292 | 3.56 | 746000 | 1.5141 |
| 1.5344 | 3.57 | 748000 | 1.5066 |
| 1.5307 | 3.58 | 750000 | 1.5087 |
| 1.5324 | 3.59 | 752000 | 1.5113 |
| 1.5273 | 3.6 | 754000 | 1.5101 |
| 1.5273 | 3.61 | 756000 | 1.4975 |
| 1.5282 | 3.62 | 758000 | 1.5053 |
| 1.5252 | 3.63 | 760000 | 1.4998 |
| 1.525 | 3.64 | 762000 | 1.5020 |
| 1.5297 | 3.65 | 764000 | 1.5075 |
| 1.5215 | 3.66 | 766000 | 1.4980 |
| 1.5237 | 3.67 | 768000 | 1.5066 |
| 1.5248 | 3.68 | 770000 | 1.5093 |
| 1.5231 | 3.69 | 772000 | 1.5090 |
| 1.5224 | 3.7 | 774000 | 1.5093 |
| 1.526 | 3.71 | 776000 | 1.5015 |
| 1.5215 | 3.71 | 778000 | 1.5045 |
| 1.5231 | 3.72 | 780000 | 1.4971 |
| 1.5205 | 3.73 | 782000 | 1.4987 |
| 1.5171 | 3.74 | 784000 | 1.5001 |
| 1.5134 | 3.75 | 786000 | 1.4951 |
| 1.5155 | 3.76 | 788000 | 1.4975 |
| 1.5154 | 3.77 | 790000 | 1.4928 |
| 1.5167 | 3.78 | 792000 | 1.4983 |
| 1.5146 | 3.79 | 794000 | 1.4938 |
| 1.5138 | 3.8 | 796000 | 1.4985 |
| 1.5137 | 3.81 | 798000 | 1.5021 |
| 1.5111 | 3.82 | 800000 | 1.5020 |
| 1.5134 | 3.83 | 802000 | 1.4998 |
| 1.5086 | 3.84 | 804000 | 1.5001 |
| 1.5081 | 3.85 | 806000 | 1.5031 |
| 1.5097 | 3.86 | 808000 | 1.5008 |
| 1.5128 | 3.87 | 810000 | 1.4990 |
| 1.5093 | 3.88 | 812000 | 1.4994 |
| 1.5109 | 3.89 | 814000 | 1.5021 |
| 1.5049 | 3.9 | 816000 | 1.5012 |
| 1.5042 | 3.91 | 818000 | 1.5013 |
| 1.5053 | 3.92 | 820000 | 1.4946 |
| 1.5066 | 3.93 | 822000 | 1.4984 |
| 1.5074 | 3.93 | 824000 | 1.4963 |
| 1.5046 | 3.94 | 826000 | 1.4972 |
| 1.5043 | 3.95 | 828000 | 1.4970 |
| 1.5064 | 3.96 | 830000 | 1.4940 |
| 1.4999 | 3.97 | 832000 | 1.4940 |
| 1.5022 | 3.98 | 834000 | 1.4934 |
| 1.5054 | 3.99 | 836000 | 1.4929 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/43folders-hotdogsladies
|
huggingtweets
| 2022-06-15T23:14:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T23:10:07Z |
---
language: en
thumbnail: http://www.huggingtweets.com/43folders-hotdogsladies/1655334875186/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1165801400/43f-logo-square-300_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474526156430798849/0Z_zfYqH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">43 Folders & Merlin Mann</div>
<div style="text-align: center; font-size: 14px;">@43folders-hotdogsladies</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 43 Folders & Merlin Mann.
| Data | 43 Folders | Merlin Mann |
| --- | --- | --- |
| Tweets downloaded | 149 | 317 |
| Retweets | 8 | 41 |
| Short tweets | 0 | 48 |
| Tweets kept | 141 | 228 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gd31yq9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @43folders-hotdogsladies's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/148w4fxc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/148w4fxc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/43folders-hotdogsladies')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emilys/BERTweet-WNUT17
|
emilys
| 2022-06-15T22:31:22Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"en",
"dataset:wnut_17",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-14T22:59:18Z |
---
language:
- en
tags:
- NER
datasets:
- wnut_17
---
bertweet-base (https://huggingface.co/vinai/bertweet-base) fine-tuned on WNUT 2017 (emerging and rare entity recognition), following https://github.com/huggingface/transformers/tree/main/examples/legacy/token-classification.
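A quick usage sketch with the token-classification pipeline (the example tweet is an illustrative assumption):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="emilys/BERTweet-WNUT17",
               aggregation_strategy="simple")

print(ner("Excited to visit London with @nasa next week!"))
```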
|
tuni/xlm-roberta-large-xnli-finetuned-mnli
|
tuni
| 2022-06-15T21:46:28Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T09:57:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlm-roberta-large-xnli-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8548888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-xnli-finetuned-mnli
This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2542
- Accuracy: 0.8549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
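For reference, these settings map onto a `TrainingArguments` configuration roughly as follows (a minimal sketch; any argument not listed above is left at its default, which is an assumption):
```python
from transformers import TrainingArguments

# Sketch of the TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-xnli-finetuned-mnli",
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```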
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7468 | 1.0 | 2250 | 0.8551 | 0.8348 |
| 0.567 | 2.0 | 4500 | 0.8935 | 0.8377 |
| 0.318 | 3.0 | 6750 | 0.9892 | 0.8492 |
| 0.1146 | 4.0 | 9000 | 1.2373 | 0.8446 |
| 0.0383 | 5.0 | 11250 | 1.2542 | 0.8549 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
huggingtweets/yemeen
|
huggingtweets
| 2022-06-15T21:27:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T21:22:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/yemeen/1655328324400/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438226079030947845/pwH4SUlU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𝕐𝕖𝕞𝕖𝕖𝕟</div>
<div style="text-align: center; font-size: 14px;">@yemeen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𝕐𝕖𝕞𝕖𝕖𝕟.
| Data | 𝕐𝕖𝕞𝕖𝕖𝕟 |
| --- | --- |
| Tweets downloaded | 2911 |
| Retweets | 1038 |
| Short tweets | 198 |
| Tweets kept | 1675 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3it77r2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yemeen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39fvs51l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39fvs51l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yemeen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jianyang/dqn-SpaceInvadersNoFrameskip-v4
|
jianyang
| 2022-06-15T20:31:27Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T20:30:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 699.00 +/- 184.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jianyang -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jianyang
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
kcarnold/inquisitive2
|
kcarnold
| 2022-06-15T19:55:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T18:28:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: inquisitive2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inquisitive2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.3.0
- Tokenizers 0.12.1
|
ouiame/bert2gpt2Summy
|
ouiame
| 2022-06-15T19:31:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"fr",
"dataset:ouiame/autotrain-data-trainproject",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T13:08:46Z |
---
tags: autotrain
language: fr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-trainproject
co2_eq_emissions: 894.9753853627794
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 985232782
- CO2 Emissions (in grams): 894.9753853627794
## Validation Metrics
- Loss: 1.9692628383636475
- Rouge1: 19.3642
- Rouge2: 7.3644
- RougeL: 16.148
- RougeLsum: 16.4988
- Gen Len: 18.9975
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ouiame/autotrain-trainproject-985232782
```
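You can also call the model from Python with 🤗 Transformers (a minimal sketch; the checkpoint is an mT5 seq2seq model, and the generation settings below are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ouiame/autotrain-trainproject-985232782"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, use_auth_token=True)

# Summarize an input text (max_length is an illustrative choice).
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```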
|
Ambiwlans/dqn-SpaceInvadersNoFrameskip-v4
|
Ambiwlans
| 2022-06-15T18:24:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T18:23:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 594.50 +/- 167.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ambiwlans -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ambiwlans
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
castorini/afriberta_base
|
castorini
| 2022-06-15T18:23:04Z | 64 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_base
## Model description
AfriBERTa base is a pretrained multilingual language model with around 111 million parameters.
The model has 8 layers, 6 attention heads, 768 hidden units, and a feed-forward size of 3072.
The model was pretrained on 11 African languages, namely: Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performance on text classification and named entity recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_base")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_base")
# we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
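Since this is a masked language model, it can also be queried directly with the fill-mask pipeline (a sketch; the Swahili example sentence is an illustrative assumption):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline("fill-mask", model="castorini/afriberta_base")
>>> unmasker("Mji mkuu wa Tanzania ni <mask>.")  # "The capital of Tanzania is <mask>."
```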
#### Limitations and bias
- This model is possibly limited by its training dataset, which consists mainly of news articles from a specific span of time; it may therefore not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper](https://aclanthology.org/2021.mrl-1.11) or [repository](https://github.com/keleog/afriberta).
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
Vkt/model-960hfacebook-2022.06.08
|
Vkt
| 2022-06-15T18:17:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-08T16:16:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model-960hfacebook-2022.06.08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-960hfacebook-2022.06.08
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2907
- Wer: 0.1804
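To transcribe your own audio with this checkpoint, a minimal sketch (the file path and 16 kHz resampling below are assumptions):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Vkt/model-960hfacebook-2022.06.08")
model = Wav2Vec2ForCTC.from_pretrained("Vkt/model-960hfacebook-2022.06.08")

# Load a local file and resample to 16 kHz (the path is a placeholder).
speech, _ = librosa.load("sample.wav", sr=16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```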
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.7634 | 0.21 | 300 | 2.9743 | 0.9998 |
| 1.6536 | 0.43 | 600 | 0.8605 | 0.7529 |
| 0.9823 | 0.64 | 900 | 0.6600 | 0.6286 |
| 0.8708 | 0.86 | 1200 | 0.5780 | 0.5736 |
| 0.7878 | 1.07 | 1500 | 0.5386 | 0.5326 |
| 0.7033 | 1.29 | 1800 | 0.4986 | 0.4992 |
| 0.681 | 1.5 | 2100 | 0.4575 | 0.4778 |
| 0.6537 | 1.72 | 2400 | 0.4591 | 0.4482 |
| 0.6263 | 1.93 | 2700 | 0.4317 | 0.4353 |
| 0.5811 | 2.14 | 3000 | 0.4149 | 0.4159 |
| 0.5565 | 2.36 | 3300 | 0.4170 | 0.3956 |
| 0.5501 | 2.57 | 3600 | 0.4007 | 0.3929 |
| 0.5444 | 2.79 | 3900 | 0.3930 | 0.3851 |
| 0.5177 | 3.0 | 4200 | 0.4006 | 0.3630 |
| 0.4682 | 3.22 | 4500 | 0.3707 | 0.3713 |
| 0.4805 | 3.43 | 4800 | 0.3564 | 0.3583 |
| 0.4715 | 3.65 | 5100 | 0.3596 | 0.3434 |
| 0.4482 | 3.86 | 5400 | 0.3555 | 0.3394 |
| 0.4407 | 4.07 | 5700 | 0.3680 | 0.3312 |
| 0.4134 | 4.29 | 6000 | 0.3534 | 0.3328 |
| 0.4165 | 4.5 | 6300 | 0.3294 | 0.3259 |
| 0.4196 | 4.72 | 6600 | 0.3353 | 0.3214 |
| 0.4117 | 4.93 | 6900 | 0.3266 | 0.3211 |
| 0.3847 | 5.15 | 7200 | 0.3365 | 0.3156 |
| 0.3687 | 5.36 | 7500 | 0.3233 | 0.3014 |
| 0.376 | 5.58 | 7800 | 0.3345 | 0.2979 |
| 0.3732 | 5.79 | 8100 | 0.3105 | 0.2882 |
| 0.3705 | 6.0 | 8400 | 0.3252 | 0.2935 |
| 0.3311 | 6.22 | 8700 | 0.3266 | 0.2911 |
| 0.3386 | 6.43 | 9000 | 0.2975 | 0.2765 |
| 0.337 | 6.65 | 9300 | 0.3070 | 0.2826 |
| 0.3458 | 6.86 | 9600 | 0.3090 | 0.2766 |
| 0.3218 | 7.08 | 9900 | 0.3117 | 0.2748 |
| 0.3041 | 7.29 | 10200 | 0.2989 | 0.2651 |
| 0.3031 | 7.51 | 10500 | 0.3210 | 0.2672 |
| 0.3037 | 7.72 | 10800 | 0.3040 | 0.2667 |
| 0.3126 | 7.93 | 11100 | 0.2867 | 0.2613 |
| 0.3005 | 8.15 | 11400 | 0.3075 | 0.2610 |
| 0.2802 | 8.36 | 11700 | 0.3129 | 0.2608 |
| 0.2785 | 8.58 | 12000 | 0.3002 | 0.2579 |
| 0.2788 | 8.79 | 12300 | 0.3063 | 0.2476 |
| 0.286 | 9.01 | 12600 | 0.2971 | 0.2495 |
| 0.2534 | 9.22 | 12900 | 0.2766 | 0.2452 |
| 0.2542 | 9.44 | 13200 | 0.2893 | 0.2405 |
| 0.2576 | 9.65 | 13500 | 0.3038 | 0.2518 |
| 0.2552 | 9.86 | 13800 | 0.2851 | 0.2429 |
| 0.2487 | 10.08 | 14100 | 0.2858 | 0.2356 |
| 0.2441 | 10.29 | 14400 | 0.2999 | 0.2364 |
| 0.2345 | 10.51 | 14700 | 0.2907 | 0.2373 |
| 0.2352 | 10.72 | 15000 | 0.2885 | 0.2402 |
| 0.2464 | 10.94 | 15300 | 0.2896 | 0.2339 |
| 0.2219 | 11.15 | 15600 | 0.2999 | 0.2351 |
| 0.2257 | 11.37 | 15900 | 0.2930 | 0.2326 |
| 0.2184 | 11.58 | 16200 | 0.2980 | 0.2353 |
| 0.2182 | 11.79 | 16500 | 0.2832 | 0.2296 |
| 0.2224 | 12.01 | 16800 | 0.2797 | 0.2285 |
| 0.1991 | 12.22 | 17100 | 0.2810 | 0.2296 |
| 0.1993 | 12.44 | 17400 | 0.2949 | 0.2253 |
| 0.2042 | 12.65 | 17700 | 0.2864 | 0.2207 |
| 0.2083 | 12.87 | 18000 | 0.2860 | 0.2278 |
| 0.1998 | 13.08 | 18300 | 0.2872 | 0.2232 |
| 0.1919 | 13.3 | 18600 | 0.2894 | 0.2247 |
| 0.1925 | 13.51 | 18900 | 0.3007 | 0.2234 |
| 0.1966 | 13.72 | 19200 | 0.2831 | 0.2176 |
| 0.1942 | 13.94 | 19500 | 0.2811 | 0.2161 |
| 0.1778 | 14.15 | 19800 | 0.2901 | 0.2196 |
| 0.1755 | 14.37 | 20100 | 0.2864 | 0.2188 |
| 0.1795 | 14.58 | 20400 | 0.2927 | 0.2170 |
| 0.1817 | 14.8 | 20700 | 0.2846 | 0.2156 |
| 0.1754 | 15.01 | 21000 | 0.3036 | 0.2137 |
| 0.1674 | 15.23 | 21300 | 0.2876 | 0.2156 |
| 0.171 | 15.44 | 21600 | 0.2812 | 0.2106 |
| 0.1603 | 15.65 | 21900 | 0.2692 | 0.2093 |
| 0.1663 | 15.87 | 22200 | 0.2745 | 0.2094 |
| 0.1608 | 16.08 | 22500 | 0.2807 | 0.2043 |
| 0.1555 | 16.3 | 22800 | 0.2872 | 0.2036 |
| 0.1546 | 16.51 | 23100 | 0.2837 | 0.2049 |
| 0.1515 | 16.73 | 23400 | 0.2746 | 0.2031 |
| 0.1571 | 16.94 | 23700 | 0.2767 | 0.2047 |
| 0.1498 | 17.16 | 24000 | 0.2837 | 0.2050 |
| 0.143 | 17.37 | 24300 | 0.2745 | 0.2038 |
| 0.1471 | 17.58 | 24600 | 0.2787 | 0.2004 |
| 0.1442 | 17.8 | 24900 | 0.2779 | 0.2005 |
| 0.1481 | 18.01 | 25200 | 0.2906 | 0.2021 |
| 0.1318 | 18.23 | 25500 | 0.2936 | 0.1991 |
| 0.1396 | 18.44 | 25800 | 0.2913 | 0.1984 |
| 0.144 | 18.66 | 26100 | 0.2806 | 0.1953 |
| 0.1341 | 18.87 | 26400 | 0.2896 | 0.1972 |
| 0.1375 | 19.09 | 26700 | 0.2937 | 0.2002 |
| 0.1286 | 19.3 | 27000 | 0.2929 | 0.1954 |
| 0.1242 | 19.51 | 27300 | 0.2968 | 0.1962 |
| 0.1305 | 19.73 | 27600 | 0.2879 | 0.1944 |
| 0.1287 | 19.94 | 27900 | 0.2850 | 0.1937 |
| 0.1286 | 20.16 | 28200 | 0.2910 | 0.1961 |
| 0.121 | 20.37 | 28500 | 0.2908 | 0.1912 |
| 0.1264 | 20.59 | 28800 | 0.2853 | 0.1904 |
| 0.1238 | 20.8 | 29100 | 0.2913 | 0.1926 |
| 0.117 | 21.02 | 29400 | 0.2907 | 0.1922 |
| 0.1154 | 21.23 | 29700 | 0.2902 | 0.1888 |
| 0.1142 | 21.44 | 30000 | 0.2854 | 0.1907 |
| 0.1168 | 21.66 | 30300 | 0.2918 | 0.1873 |
| 0.1168 | 21.87 | 30600 | 0.2897 | 0.1873 |
| 0.1105 | 22.09 | 30900 | 0.2951 | 0.1856 |
| 0.1134 | 22.3 | 31200 | 0.2842 | 0.1847 |
| 0.1111 | 22.52 | 31500 | 0.2884 | 0.1829 |
| 0.1088 | 22.73 | 31800 | 0.2991 | 0.1840 |
| 0.1139 | 22.94 | 32100 | 0.2876 | 0.1839 |
| 0.1078 | 23.16 | 32400 | 0.2899 | 0.1830 |
| 0.1087 | 23.37 | 32700 | 0.2927 | 0.1803 |
| 0.1076 | 23.59 | 33000 | 0.2924 | 0.1801 |
| 0.11 | 23.8 | 33300 | 0.2877 | 0.1804 |
| 0.1067 | 24.02 | 33600 | 0.2918 | 0.1799 |
| 0.1104 | 24.23 | 33900 | 0.2908 | 0.1809 |
| 0.1023 | 24.45 | 34200 | 0.2939 | 0.1807 |
| 0.0993 | 24.66 | 34500 | 0.2925 | 0.1802 |
| 0.1053 | 24.87 | 34800 | 0.2907 | 0.1804 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/_mohamads
|
huggingtweets
| 2022-06-15T17:37:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T17:33:04Z |
---
language: en
thumbnail: http://www.huggingtweets.com/_mohamads/1655314541919/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522920330960027648/Z5piAxnG_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🧬 محمد الزهراني</div>
<div style="text-align: center; font-size: 14px;">@_mohamads</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🧬 محمد الزهراني.
| Data | 🧬 محمد الزهراني |
| --- | --- |
| Tweets downloaded | 1108 |
| Retweets | 75 |
| Short tweets | 90 |
| Tweets kept | 943 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y8wg10zm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_mohamads's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jm1spua) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jm1spua/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_mohamads')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
joaogante/test_text
|
joaogante
| 2022-06-15T16:53:59Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"distilbert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-31T16:02:39Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/master/examples/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the
BERT base model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference and downstream tasks.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT was pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameter
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
testingacc/dall-e-private
|
testingacc
| 2022-06-15T16:42:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-06-15T16:42:19Z |
---
title: DALL·E mini
description: "DALL·E mini - a Hugging Face Space by Boris Dayma et al."
emoji: 🥑
colorFrom: yellow
colorTo: green
sdk: static
pinned: true
server: testingacc
license: apache-2.0
---
|
Alireza1044/mobilebert_rte
|
Alireza1044
| 2022-06-15T16:24:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T16:09:49Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6678700361010831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8396
- Accuracy: 0.6679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ksabeh/albert-base-v2-attribute-correction-mlm
|
ksabeh
| 2022-06-15T15:49:41Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-15T07:46:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/albert-base-v2-mlm-electronics-attribute-correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/albert-base-v2-mlm-electronics-attribute-correction
This model is a fine-tuned version of [ksabeh/albert-base-v2-mlm-electronics](https://huggingface.co/ksabeh/albert-base-v2-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0541
- Validation Loss: 0.0570
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36852, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1364 | 0.0743 | 0 |
| 0.0541 | 0.0570 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ncfrey/ChemGPT-1.2B
|
ncfrey
| 2022-06-15T15:44:24Z | 116 | 13 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-11T20:16:48Z |
---
tags:
- chemistry
---
# ChemGPT 1.2B
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
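For example, a minimal generation sketch (the SELFIES prompt and sampling settings are illustrative assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ncfrey/ChemGPT-1.2B")
model = AutoModelForCausalLM.from_pretrained("ncfrey/ChemGPT-1.2B")

# Prompt with a SELFIES fragment and sample continuations.
inputs = tokenizer("[C][C][O]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```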
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
|
Alireza1044/mobilebert_stsb
|
Alireza1044
| 2022-06-15T15:37:52Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T15:05:55Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8735136732190296
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stsb
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5348
- Pearson: 0.8773
- Spearmanr: 0.8735
- Combined Score: 0.8754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ncfrey/ChemGPT-19M
|
ncfrey
| 2022-06-15T15:19:57Z | 384 | 5 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-11T20:02:27Z |
---
tags:
- chemistry
---
# ChemGPT 19M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
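For reference, a conversion sketch with the `selfies` library (the example SMILES string is an illustrative assumption):
```python
import selfies as sf

# Convert a SMILES string to SELFIES, as was done for the training data.
smiles = "CCO"  # ethanol, an illustrative example
selfies_string = sf.encoder(smiles)
print(selfies_string)              # e.g. [C][C][O]
print(sf.decoder(selfies_string))  # round-trip back to SMILES
```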
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
|
ncfrey/ChemGPT-4.7M
|
ncfrey
| 2022-06-15T15:17:11Z | 391 | 19 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-11T19:54:55Z |
---
tags:
- chemistry
---
# ChemGPT 4.7M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
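For example, a pipeline-based generation sketch (the SELFIES prompt and length setting are illustrative assumptions):
```python
from transformers import pipeline

# Load the checkpoint through the text-generation pipeline.
generator = pipeline("text-generation", model="ncfrey/ChemGPT-4.7M")

# Prompt with a SELFIES fragment; the length setting is illustrative.
print(generator("[C][C][O]", max_length=50))
```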
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
|
themindorchestra/Soundhealing
|
themindorchestra
| 2022-06-15T13:02:25Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-06-15T13:02:25Z |
---
license: cc-by-nc-sa-4.0
---
|
AnyaSchen/rugpt3_tyutchev
|
AnyaSchen
| 2022-06-15T11:33:16Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T11:27:40Z |
This model is a fine-tuned version of ruGPT-3 medium, tuned to the style of Tyutchev's poetry in Russian. You can give it a word, a phrase, or just an empty line as input, and it will generate a poem in Tyutchev's style.

|
AnyaSchen/rugpt3_pushkin
|
AnyaSchen
| 2022-06-15T11:25:56Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T14:45:08Z |
This model was created by fine-tuning ruGPT-3 medium on the works of A.S. Pushkin, so it can now generate poetry in the style of this poet.

|
vijaygoriya/test_trainer
|
vijaygoriya
| 2022-06-15T11:23:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-19T11:03:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9646
- Accuracy: 0.8171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4452 | 1.0 | 2000 | 0.5505 | 0.7673 |
| 0.277 | 2.0 | 4000 | 0.7271 | 0.8210 |
| 0.1412 | 3.0 | 6000 | 0.9646 | 0.8171 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Corianas/dqn-BeamRiderNoFrameskip-v4
|
Corianas
| 2022-06-15T10:41:50Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T08:55:40Z |
---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 3983.00 +/- 1512.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
---
# **DQN** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
fabiochiu/dqn-SpaceInvadersNoFrameskip-v4
|
fabiochiu
| 2022-06-15T10:32:49Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T10:32:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 631.50 +/- 84.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fabiochiu -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fabiochiu
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
HrayrM/bert-base-uncased-issues-128
|
HrayrM
| 2022-06-15T10:29:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-15T01:38:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0987 | 1.0 | 291 | 1.6066 |
| 1.631 | 2.0 | 582 | 1.4775 |
| 1.4933 | 3.0 | 873 | 1.4646 |
| 1.3984 | 4.0 | 1164 | 1.3314 |
| 1.3377 | 5.0 | 1455 | 1.3122 |
| 1.274 | 6.0 | 1746 | 1.2062 |
| 1.2538 | 7.0 | 2037 | 1.2626 |
| 1.192 | 8.0 | 2328 | 1.1832 |
| 1.1612 | 9.0 | 2619 | 1.2055 |
| 1.1489 | 10.0 | 2910 | 1.1605 |
| 1.1262 | 11.0 | 3201 | 1.1925 |
| 1.1022 | 12.0 | 3492 | 1.1309 |
| 1.0892 | 13.0 | 3783 | 1.1692 |
| 1.0812 | 14.0 | 4074 | 1.2384 |
| 1.0666 | 15.0 | 4365 | 1.0822 |
| 1.0533 | 16.0 | 4656 | 1.2432 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
|
TinySuitStarfish/q-FrozenLake-v1-4x4-Slippery
|
TinySuitStarfish
| 2022-06-15T10:09:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T10:09:31Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="TinySuitStarfish/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
roscazo/gpt2-covid
|
roscazo
| 2022-06-15T09:46:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T08:55:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-covid
This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-base-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne) on an unknown dataset.
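A minimal generation sketch (the prompt is illustrative; the base model is Spanish, so Spanish prompts work best):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="roscazo/gpt2-covid")
print(generator("Los síntomas más comunes de la COVID-19 son", max_length=50))
```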
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
FritzOS/TEdetection_distiBERT_NER_final_8e
|
FritzOS
| 2022-06-15T09:37:10Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-15T09:36:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_NER_final_8e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_final_8e
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_final_8e](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_final_8e) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0032
- Validation Loss: 0.0037
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 220743, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0032 | 0.0037 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.3.0
- Tokenizers 0.12.1
|
multimodalart/compvis-latent-diffusion-text2img-large
|
multimodalart
| 2022-06-15T08:59:10Z | 0 | 12 | null |
[
"text-to-image",
"license:mit",
"region:us"
] |
text-to-image
| 2022-04-11T15:44:02Z |
---
license: mit
tags:
- text-to-image
---
|
RuiqianLi/Malaya-speech_fine-tune_MrBrown_15_Jun
|
RuiqianLi
| 2022-06-15T08:23:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:uob_singlish",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-15T04:20:17Z |
---
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: Malaya-speech_fine-tune_MrBrown_15_Jun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Malaya-speech_fine-tune_MrBrown_15_Jun
This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4822
- Wer: 0.2449
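A minimal transcription sketch (a sketch only; `sample.wav` is a placeholder path, and wav2vec2 expects 16 kHz mono audio):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RuiqianLi/Malaya-speech_fine-tune_MrBrown_15_Jun",
)
# "sample.wav" is a placeholder; substitute any 16 kHz mono recording.
print(asr("sample.wav")["text"])
```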
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1607 | 5.26 | 200 | 0.3983 | 0.2381 |
| 0.5184 | 10.52 | 400 | 0.3256 | 0.2245 |
| 0.2993 | 15.78 | 600 | 0.3437 | 0.2426 |
| 0.2485 | 21.05 | 800 | 0.4547 | 0.2585 |
| 0.1917 | 26.31 | 1000 | 0.4598 | 0.2517 |
| 0.1586 | 31.57 | 1200 | 0.4050 | 0.2290 |
| 0.1486 | 36.83 | 1400 | 0.4186 | 0.2653 |
| 0.1307 | 42.1 | 1600 | 0.4284 | 0.2857 |
| 0.0895 | 47.36 | 1800 | 0.5158 | 0.2562 |
| 0.0526 | 52.62 | 2000 | 0.4525 | 0.2449 |
| 0.0553 | 57.88 | 2200 | 0.4364 | 0.2336 |
| 0.037 | 63.16 | 2400 | 0.3873 | 0.2449 |
| 0.0439 | 68.42 | 2600 | 0.3914 | 0.2404 |
| 0.0411 | 73.68 | 2800 | 0.4673 | 0.2494 |
| 0.0242 | 78.94 | 3000 | 0.4801 | 0.2426 |
| 0.0833 | 84.21 | 3200 | 0.4641 | 0.2630 |
| 0.034 | 89.47 | 3400 | 0.4607 | 0.2449 |
| 0.02 | 94.73 | 3600 | 0.4825 | 0.2449 |
| 0.0211 | 99.99 | 3800 | 0.4822 | 0.2449 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sdugar/cross-en-de-fr-xlmr-200d-sentence-transformer
|
sdugar
| 2022-06-15T08:21:33Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-15T07:00:45Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# sdugar/cross-en-de-fr-xlmr-200d-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sdugar/cross-en-de-fr-xlmr-200d-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sdugar/cross-en-de-fr-xlmr-200d-sentence-transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 124278 with parameters:
```
{'batch_size': 25, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit() method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(dense): Dense({'in_features': 768, 'out_features': 200, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
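A short check that the Dense projection above really yields 200-dimensional embeddings, plus a cross-lingual similarity score (a sketch; the sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sdugar/cross-en-de-fr-xlmr-200d-sentence-transformer")
emb = model.encode(["Das ist ein Satz.", "This is a sentence."])
print(emb.shape)                     # (2, 200) from the Dense layer above
print(util.cos_sim(emb[0], emb[1]))  # cross-lingual cosine similarity
```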
## Citing & Authors
<!--- Describe where people can find more information -->
|
FritzOS/TEdetection_distiBERT_mLM_final_8e
|
FritzOS
| 2022-06-15T07:55:31Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-15T07:55:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_mLM_final_8e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_mLM_final_8e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.3.0
- Tokenizers 0.12.1
|
hossay/biobert-base-cased-v1.2-finetuned-ner
|
hossay
| 2022-06-15T07:38:51Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-15T07:19:38Z |
---
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ncbi_disease
type: ncbi_disease
args: ncbi_disease
metrics:
- name: Precision
type: precision
value: 0.8396334478808706
- name: Recall
type: recall
value: 0.8731387730792138
- name: F1
type: f1
value: 0.856058394160584
- name: Accuracy
type: accuracy
value: 0.9824805769647444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0706
- Precision: 0.8396
- Recall: 0.8731
- F1: 0.8561
- Accuracy: 0.9825
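A minimal sketch for extracting disease mentions with the pipeline API (`aggregation_strategy="simple"` merges word pieces into entity spans; the sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hossay/biobert-base-cased-v1.2-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```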
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 340 | 0.0691 | 0.8190 | 0.7868 | 0.8026 | 0.9777 |
| 0.101 | 2.0 | 680 | 0.0700 | 0.8334 | 0.8553 | 0.8442 | 0.9807 |
| 0.0244 | 3.0 | 1020 | 0.0706 | 0.8396 | 0.8731 | 0.8561 | 0.9825 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-Bert-base-uncased-POS
|
jkhan447
| 2022-06-15T07:17:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T04:05:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased-POS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-POS
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1904
- Accuracy: 0.591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
lewtun/dog-vs-chicken
|
lewtun
| 2022-06-15T07:09:02Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-15T07:08:51Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: dog-vs-chicken
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# dog-vs-chicken
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### crispy fried chicken

#### poodle

|
seomh/distilbert-base-uncased-finetuned-squad
|
seomh
| 2022-06-15T06:49:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-11T14:04:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0083
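A minimal extractive question-answering sketch (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="seomh/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```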
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2258 | 1.0 | 5533 | 0.0560 |
| 0.952 | 2.0 | 11066 | 0.0096 |
| 0.7492 | 3.0 | 16599 | 0.0083 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/wikisignpost
|
huggingtweets
| 2022-06-15T06:24:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T06:07:57Z |
---
language: en
thumbnail: http://www.huggingtweets.com/wikisignpost/1655274233816/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/795028567398576128/GG1GUpJ7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Signpost</div>
<div style="text-align: center; font-size: 14px;">@wikisignpost</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Signpost.
| Data | The Signpost |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 522 |
| Short tweets | 47 |
| Tweets kept | 2647 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7z6btxad/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wikisignpost's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27ceco72) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27ceco72/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wikisignpost')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
olpa/pegasus-samsum
|
olpa
| 2022-06-15T04:40:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-15T03:21:16Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4863
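Since SAMSum is a dialogue-summarization corpus, the natural input is a short chat transcript. A minimal usage sketch (the dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="olpa/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue)[0]["summary_text"])
```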
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7014 | 0.54 | 500 | 1.4863 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
danielcfho/q-Taxi-v3
|
danielcfho
| 2022-06-15T04:32:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T04:32:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="danielcfho/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/mysteriousgam54
|
huggingtweets
| 2022-06-15T04:06:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-15T04:05:58Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1429866660299689984/CGXAQuWf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">themysteriousgamer</div>
<div style="text-align: center; font-size: 14px;">@mysteriousgam54</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from themysteriousgamer.
| Data | themysteriousgamer |
| --- | --- |
| Tweets downloaded | 1315 |
| Retweets | 210 |
| Short tweets | 168 |
| Tweets kept | 937 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m4i8lg1e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mysteriousgam54's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rz0m12t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rz0m12t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mysteriousgam54')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
steven123/Teeth_C
|
steven123
| 2022-06-15T02:53:44Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-15T02:53:33Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Teeth_C
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
# Teeth_C
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth

|
DLochmelis33/22s-dl-sentiment-1
|
DLochmelis33
| 2022-06-15T01:07:08Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T01:01:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: 22s-dl-sentiment-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9542333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 22s-dl-sentiment-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2574
- Accuracy: 0.9542
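A minimal classification sketch; yelp_review_full has five star-rating classes, so unless the model config defines label names the pipeline returns generic ids:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="DLochmelis33/22s-dl-sentiment-1")
# Labels appear as LABEL_0 ... LABEL_4 unless id2label was customized.
print(clf("The food was fantastic and the staff were friendly."))
```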
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
tanbwilson/q-Taxi-v3
|
tanbwilson
| 2022-06-15T01:04:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T01:04:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="tanbwilson/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
tanbwilson/q-FrozenLake-v1-4x4-noSlippery
|
tanbwilson
| 2022-06-15T01:02:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-15T01:02:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="tanbwilson/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
enoriega/rule_learning_margin_1mm_spanpred
|
enoriega
| 2022-06-15T00:55:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"endpoints_compatible",
"region:us"
] | null | 2022-06-11T02:59:23Z |
---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_spanpred
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_spanpred
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Margin Accuracy: 0.8518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.5448 | 0.16 | 20 | 0.5229 | 0.7717 |
| 0.4571 | 0.32 | 40 | 0.4292 | 0.8109 |
| 0.4296 | 0.48 | 60 | 0.4009 | 0.8193 |
| 0.4028 | 0.64 | 80 | 0.3855 | 0.8296 |
| 0.3878 | 0.8 | 100 | 0.3757 | 0.8334 |
| 0.3831 | 0.96 | 120 | 0.3643 | 0.8367 |
| 0.3591 | 1.12 | 140 | 0.3582 | 0.8393 |
| 0.3598 | 1.28 | 160 | 0.3533 | 0.8401 |
| 0.3635 | 1.44 | 180 | 0.3442 | 0.8427 |
| 0.3478 | 1.6 | 200 | 0.3406 | 0.8472 |
| 0.342 | 1.76 | 220 | 0.3352 | 0.8479 |
| 0.3327 | 1.92 | 240 | 0.3352 | 0.8486 |
| 0.3487 | 2.08 | 260 | 0.3293 | 0.8487 |
| 0.3387 | 2.24 | 280 | 0.3298 | 0.8496 |
| 0.3457 | 2.4 | 300 | 0.3279 | 0.8505 |
| 0.3483 | 2.56 | 320 | 0.3286 | 0.8510 |
| 0.3421 | 2.72 | 340 | 0.3245 | 0.8517 |
| 0.3332 | 2.88 | 360 | 0.3252 | 0.8517 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
steven123/Teeth_B
|
steven123
| 2022-06-15T00:31:50Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-15T00:31:36Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Teeth_B
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6800000071525574
---
# Teeth_B
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth

|
tyler-richardett/ppo-LunarLander-v2
|
tyler-richardett
| 2022-06-14T23:18:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-14T23:17:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 136.42 +/- 57.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it against the repository's files.
checkpoint = load_from_hub(
    repo_id="tyler-richardett/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
ahmeddbahaa/xlmroberta2xlmroberta-finetuned-ar-wikilingua
|
ahmeddbahaa
| 2022-06-14T20:55:49Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarization",
"ar",
"roberta",
"xlmroberta2xlmroberta",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-14T08:51:35Z |
---
tags:
- summarization
- ar
- encoder-decoder
- roberta
- xlmroberta2xlmroberta
- Abstractive Summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: xlmroberta2xlmroberta-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta2xlmroberta-finetuned-ar-wikilingua
This model is a fine-tuned version of [](https://huggingface.co/) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7757
- Rouge-1: 11.2
- Rouge-2: 1.96
- Rouge-l: 10.28
- Gen Len: 19.8
- Bertscore: 66.27
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 8.03 | 1.0 | 312 | 7.3208 | 0.19 | 0.0 | 0.19 | 20.0 | 54.84 |
| 7.2309 | 2.0 | 624 | 7.1107 | 1.17 | 0.03 | 1.16 | 20.0 | 60.0 |
| 7.0752 | 3.0 | 936 | 7.0061 | 2.58 | 0.15 | 2.55 | 20.0 | 63.52 |
| 6.7538 | 4.0 | 1248 | 6.4189 | 5.75 | 0.46 | 5.55 | 19.95 | 62.83 |
| 6.1513 | 5.0 | 1560 | 5.8402 | 8.46 | 1.04 | 8.08 | 19.2 | 64.25 |
| 5.6639 | 6.0 | 1872 | 5.3938 | 8.62 | 1.17 | 8.16 | 19.28 | 64.81 |
| 5.2857 | 7.0 | 2184 | 5.0719 | 9.34 | 1.41 | 8.61 | 19.71 | 65.29 |
| 5.027 | 8.0 | 2496 | 4.9047 | 10.42 | 1.52 | 9.57 | 19.57 | 65.75 |
| 4.8747 | 9.0 | 2808 | 4.8032 | 10.79 | 1.71 | 9.91 | 19.42 | 66.2 |
| 4.7855 | 10.0 | 3120 | 4.7757 | 11.01 | 1.73 | 10.04 | 19.55 | 66.24 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jaay/test
|
Jaay
| 2022-06-14T20:51:25Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-14T20:51:25Z |
---
license: bigscience-bloom-rail-1.0
---
|
ahmeddbahaa/AraBART-finetuned-ar
|
ahmeddbahaa
| 2022-06-14T20:41:43Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-04-04T14:58:44Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: AraBART-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraBART-finetuned-ar
This model is a fine-tuned version of [moussaKam/AraBART](https://huggingface.co/moussaKam/AraBART) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7449
- Rouge-1: 31.08
- Rouge-2: 14.68
- Rouge-l: 27.36
- Gen Len: 19.64
- Bertscore: 73.86
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.4318 | 1.0 | 2345 | 3.7996 | 28.93 | 13.2 | 25.56 | 19.51 | 73.17 |
| 4.0338 | 2.0 | 4690 | 3.7483 | 30.29 | 14.24 | 26.73 | 19.5 | 73.59 |
| 3.8586 | 3.0 | 7035 | 3.7281 | 30.44 | 14.44 | 26.92 | 19.75 | 73.58 |
| 3.7289 | 4.0 | 9380 | 3.7204 | 30.55 | 14.49 | 26.88 | 19.66 | 73.73 |
| 3.6245 | 5.0 | 11725 | 3.7199 | 30.73 | 14.63 | 27.11 | 19.69 | 73.68 |
| 3.5392 | 6.0 | 14070 | 3.7221 | 30.85 | 14.65 | 27.21 | 19.7 | 73.77 |
| 3.4694 | 7.0 | 16415 | 3.7286 | 31.08 | 14.8 | 27.41 | 19.62 | 73.84 |
| 3.4126 | 8.0 | 18760 | 3.7384 | 31.06 | 14.77 | 27.41 | 19.64 | 73.82 |
| 3.3718 | 9.0 | 21105 | 3.7398 | 31.18 | 14.89 | 27.49 | 19.67 | 73.87 |
| 3.3428 | 10.0 | 23450 | 3.7449 | 31.19 | 14.88 | 27.44 | 19.68 | 73.87 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AAkhilesh/wav2vec2-large-xls-r-300m-ta-colab
|
AAkhilesh
| 2022-06-14T20:39:54Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-02T14:12:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tanbwilson/ppo-LunarLander-v2
|
tanbwilson
| 2022-06-14T20:31:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-14T20:31:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 270.14 +/- 22.06
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it against the repository's files.
checkpoint = load_from_hub(
    repo_id="tanbwilson/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
nateraw/koala-panda-wombat
|
nateraw
| 2022-06-14T20:31:04Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-14T20:30:51Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: koala-panda-wombat
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9850746393203735
---
# koala-panda-wombat
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### koala

#### panda

#### wombat

|
cindy203cc/finetuning-sentiment-model-3000-samples
|
cindy203cc
| 2022-06-14T19:16:33Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-14T18:55:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3187
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|