| Column | Type | Range / summary |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 18:33:19 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 18:33:14 |
| card | string | lengths 11 – 1.01M |
hhffxx/pegasus-samsum
|
hhffxx
| 2022-08-29T10:52:44Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T06:48:07Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [stas/pegasus-cnn_dailymail-tiny-random](https://huggingface.co/stas/pegasus-cnn_dailymail-tiny-random) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5735
## Model description
More information needed
## Intended uses & limitations
More information needed
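A minimal usage sketch (assuming the checkpoint works with the standard `transformers` summarization pipeline; the dialogue is illustrative):
```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint loads with the standard summarization pipeline.
summarizer = pipeline("summarization", model="hhffxx/pegasus-samsum")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure, I'd love some!"  # illustrative input
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```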
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6148 | 0.54 | 500 | 7.5735 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
autoevaluate/image-multi-class-classification
|
autoevaluate
| 2022-08-29T10:11:22Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:mnist",
"dataset:autoevaluate/mnist-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-21T08:52:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mnist
- autoevaluate/mnist-sample
metrics:
- accuracy
model-index:
- name: image-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9833333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
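A minimal usage sketch (assuming the checkpoint works with the standard `transformers` image-classification pipeline; `digit.png` is a hypothetical MNIST-style input image):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard image-classification pipeline.
classifier = pipeline("image-classification", model="autoevaluate/image-multi-class-classification")

# Illustrative input: any path or URL to an MNIST-style digit image.
print(classifier("digit.png", top_k=3))
```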
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3743 | 1.0 | 422 | 0.0556 | 0.9833 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
autoevaluate/translation
|
autoevaluate
| 2022-08-29T10:08:28Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"dataset:autoevaluate/wmt16-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-28T14:14:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
- autoevaluate/wmt16-sample
metrics:
- bleu
model-index:
- name: translation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.5866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3170
- Bleu: 28.5866
- Gen Len: 33.9575
## Model description
More information needed
## Intended uses & limitations
More information needed
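A minimal usage sketch (assuming the checkpoint works with the standard `transformers` translation pipeline and keeps the English-to-Romanian direction of the base opus-mt-en-ro model; the sentence is illustrative):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard translation pipeline (English -> Romanian).
translator = pipeline("translation", model="autoevaluate/translation")

print(translator("The quick brown fox jumps over the lazy dog.")[0]["translation_text"])
```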
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8302 | 0.03 | 1000 | 1.3170 | 28.5866 | 33.9575 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
artfrontier/ddpm-butterflies-128
|
artfrontier
| 2022-08-29T09:07:51Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T07:14:18Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
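Pending the official snippet above, a minimal sketch assuming the standard `diffusers` `DDPMPipeline` API (the repo tags include `diffusers:DDPMPipeline`):
```python
from diffusers import DDPMPipeline

# Assumption: the checkpoint loads with the standard DDPMPipeline API.
pipeline = DDPMPipeline.from_pretrained("artfrontier/ddpm-butterflies-128")
image = pipeline().images[0]  # one unconditional sample
image.save("butterfly.png")
```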
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/artfrontier/ddpm-butterflies-128/tensorboard?#scalars)
|
kingabzpro/Reinforce-CartPole-v1
|
kingabzpro
| 2022-08-29T08:58:15Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T08:56:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Arashasg/WikiBert2WikiBert
|
Arashasg
| 2022-08-29T08:34:49Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"Wikipedia",
"Summarizer",
"bert2bert",
"Summarization",
"fa",
"dataset:pn-summary",
"dataset:XL-Sum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T12:17:42Z |
---
language:
- fa
tags:
- Wikipedia
- Summarizer
- bert2bert
- Summarization
task_categories:
- Summarization
- text generation
task_ids:
- news-articles-summarization
license:
- apache-2.0
multilinguality:
- monolingual
datasets:
- pn-summary
- XL-Sum
metrics:
- rouge-1
- rouge-2
- rouge-l
---
# WikiBert2WikiBert
Bert language models can be employed for Summarization tasks. WikiBert2WikiBert is an encoder-decoder transformer model that is initialized using the Persian WikiBert Model weights. The WikiBert Model is a Bert language model which is fine-tuned on Persian Wikipedia. After using the WikiBert weights for initialization, the model is trained for five epochs on PN-summary and Persian BBC datasets.
## How to Use:
You can use the code below to get the model's outputs, or simply try the inference widget on the model page.
```python
import torch
from transformers import (
    BertTokenizerFast,
    EncoderDecoderConfig,
    EncoderDecoderModel,
    BertConfig
)

model_name = 'Arashasg/WikiBert2WikiBert'
tokenizer = BertTokenizerFast.from_pretrained(model_name)
config = EncoderDecoderConfig.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name, config=config)

# Move the model to GPU when available so it matches the device of the inputs below.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

def generate_summary(text):
    inputs = tokenizer(text, padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    outputs = model.generate(input_ids, attention_mask=attention_mask)
    output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return output_str

text = 'your input comes here'
summary = generate_summary(text)
```
## Evaluation
I set aside 5 percent of the pn-summary dataset to evaluate the model. Its ROUGE scores are as follows:
| Rouge-1 | Rouge-2 | Rouge-l |
| ------------- | ------------- | ------------- |
| 38.97% | 18.42% | 34.50% |
|
dav3794/demo_knots_1_2
|
dav3794
| 2022-08-29T08:26:35Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:dav3794/autotrain-data-demo-knots-1-2_bis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-29T08:21:00Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dav3794/autotrain-data-demo-knots-1-2_bis
co2_eq_emissions:
emissions: 0.04019334522125584
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1328150718
- CO2 Emissions (in grams): 0.0402
## Validation Metrics
- Loss: 0.381
- Accuracy: 0.857
- Precision: 0.842
- Recall: 0.970
- AUC: 0.889
- F1: 0.901
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots-1-2_bis-1328150718
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots-1-2_bis-1328150718", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots-1-2_bis-1328150718", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
hieule/bert-finetuned-ner
|
hieule
| 2022-08-29T07:32:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T06:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9441802252816022
- name: Accuracy
type: accuracy
value: 0.9866221227997881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0858
- Precision: 0.9363
- Recall: 0.9522
- F1: 0.9442
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
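A minimal usage sketch (assuming the checkpoint works with the standard `transformers` token-classification pipeline; the sentence is illustrative):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard token-classification (NER) pipeline.
ner = pipeline("token-classification", model="hieule/bert-finetuned-ner", aggregation_strategy="simple")

print(ner("Hugging Face is based in New York City."))  # illustrative input
```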
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0081 | 1.0 | 1756 | 0.0914 | 0.9273 | 0.9446 | 0.9359 | 0.9848 |
| 0.012 | 2.0 | 3512 | 0.0852 | 0.9321 | 0.9478 | 0.9399 | 0.9857 |
| 0.0036 | 3.0 | 5268 | 0.0858 | 0.9363 | 0.9522 | 0.9442 | 0.9866 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pinot/wav2vec2-large-xls-r-300m-ja-colab-new
|
pinot
| 2022-08-29T07:21:29Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-28T16:18:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-ja-colab-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ja-colab-new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1931
- Wer: 0.2584
## Model description
More information needed
## Intended uses & limitations
More information needed
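A minimal usage sketch (assuming the checkpoint works with the standard `transformers` automatic-speech-recognition pipeline; `sample.wav` is a hypothetical 16 kHz audio file):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="pinot/wav2vec2-large-xls-r-300m-ja-colab-new")

# Illustrative input: a path to a 16 kHz mono audio file.
print(asr("sample.wav")["text"])
```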
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 637 | 5.3089 | 0.9670 |
| No log | 2.0 | 1274 | 3.2716 | 0.6123 |
| No log | 3.0 | 1911 | 2.1797 | 0.4708 |
| No log | 4.0 | 2548 | 1.8331 | 0.4113 |
| 6.3938 | 5.0 | 3185 | 1.5111 | 0.3460 |
| 6.3938 | 6.0 | 3822 | 1.3575 | 0.3132 |
| 6.3938 | 7.0 | 4459 | 1.2946 | 0.2957 |
| 6.3938 | 8.0 | 5096 | 1.2346 | 0.2762 |
| 1.023 | 9.0 | 5733 | 1.2053 | 0.2653 |
| 1.023 | 10.0 | 6370 | 1.1931 | 0.2584 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
brilianputraa/q-Taxi-v3
|
brilianputraa
| 2022-08-29T07:15:46Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T07:10:31Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="brilianputraa/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
akrisroof/ddpm-butterflies-128
|
akrisroof
| 2022-08-29T04:18:07Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T03:37:31Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
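Pending the official snippet above, a minimal batched-sampling sketch assuming the standard `diffusers` `DDPMPipeline` API (the repo tags include `diffusers:DDPMPipeline`):
```python
import torch
from diffusers import DDPMPipeline

# Assumption: the checkpoint loads with the standard DDPMPipeline API.
pipe = DDPMPipeline.from_pretrained("akrisroof/ddpm-butterflies-128")
images = pipe(batch_size=4, generator=torch.Generator().manual_seed(0)).images  # four unconditional samples
for i, image in enumerate(images):
    image.save(f"butterfly_{i}.png")
```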
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/akrisroof/ddpm-butterflies-128/tensorboard?#scalars)
|
JAlexis/ajuste_02
|
JAlexis
| 2022-08-29T02:11:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-29T02:08:03Z |
---
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "What are the risk factors for covid-19?"
context: "To identify risk factors for hospital deaths from COVID-19, the OpenSAFELY platform examined electronic health records from 17.4 million UK adults. The authors used multivariable Cox proportional hazards model to identify the association of risk of death with older age, lower socio-economic status, being male, non-white ethnic background and certain clinical conditions (diabetes, obesity, cancer, respiratory diseases, heart, kidney, liver, neurological and autoimmune conditions). Notably, asthma was identified as a risk factor, despite prior suggestion of a potential protective role. Interestingly, higher risks due to ethnicity or lower socio-economic status could not be completely attributed to pre-existing health conditions."
---
|
JAlexis/ajuste_01
|
JAlexis
| 2022-08-29T01:10:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-29T00:29:25Z |
---
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "What are the risk factors for covid-19?"
context: "To identify risk factors for hospital deaths from COVID-19, the OpenSAFELY platform examined electronic health records from 17.4 million UK adults. The authors used multivariable Cox proportional hazards model to identify the association of risk of death with older age, lower socio-economic status, being male, non-white ethnic background and certain clinical conditions (diabetes, obesity, cancer, respiratory diseases, heart, kidney, liver, neurological and autoimmune conditions). Notably, asthma was identified as a risk factor, despite prior suggestion of a potential protective role. Interestingly, higher risks due to ethnicity or lower socio-economic status could not be completely attributed to pre-existing health conditions."
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/ajuste_01"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'What are the risk factors for covid-19?',
'context': 'To identify risk factors for hospital deaths from COVID-19, the OpenSAFELY platform examined electronic health records from 17.4 million UK adults. The authors used multivariable Cox proportional hazards model to identify the association of risk of death with older age, lower socio-economic status, being male, non-white ethnic background and certain clinical conditions (diabetes, obesity, cancer, respiratory diseases, heart, kidney, liver, neurological and autoimmune conditions). Notably, asthma was identified as a risk factor, despite prior suggestion of a potential protective role. Interestingly, higher risks due to ethnicity or lower socio-economic status could not be completely attributed to pre-existing health conditions.',
}
nlp(inputs)
```
|
silviacamplani/distilbert-finetuned-dapt_tapt-lm-music
|
silviacamplani
| 2022-08-28T22:28:18Z | 55 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T18:43:22Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-finetuned-dapt_tapt-lm-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-dapt_tapt-lm-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8680
- Validation Loss: 2.4306
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 32918, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8680 | 2.4306 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mabrouk/distilbert-base-uncased-finetuned-emotion
|
mabrouk
| 2022-08-28T22:07:48Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T21:36:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254357449049359
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
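A minimal usage sketch (assuming the checkpoint works with the standard `transformers` text-classification pipeline; the input sentence is illustrative):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard text-classification pipeline.
classifier = pipeline("text-classification", model="mabrouk/distilbert-base-uncased-finetuned-emotion")

print(classifier("I can't wait to see you again!"))  # illustrative input
```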
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8523 | 1.0 | 250 | 0.3186 | 0.908 | 0.9064 |
| 0.247 | 2.0 | 500 | 0.2206 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ChaoLi/xlm-roberta-base-finetuned-panx-it
|
ChaoLi
| 2022-08-28T19:55:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:52:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8224755700325732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2521
- F1: 0.8225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8088 | 1.0 | 70 | 0.3423 | 0.7009 |
| 0.2844 | 2.0 | 140 | 0.2551 | 0.8027 |
| 0.1905 | 3.0 | 210 | 0.2521 | 0.8225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/xlm-roberta-base-finetuned-panx-de-fr
|
ChaoLi
| 2022-08-28T19:46:37Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:37:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
baudm/crnn
|
baudm
| 2022-08-28T19:06:36Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T19:03:22Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# CRNN v1.0
CRNN model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@article{shi2016end,
title={An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition},
author={Shi, Baoguang and Bai, Xiang and Yao, Cong},
journal={IEEE transactions on pattern analysis and machine intelligence},
volume={39},
number={11},
pages={2298--2304},
year={2016},
publisher={IEEE}
}
```
|
baudm/trba
|
baudm
| 2022-08-28T19:03:01Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T19:01:11Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# TRBA v1.0
TRBA model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{Baek_2019_ICCV,
author = {Baek, Jeonghun and Kim, Geewook and Lee, Junyeop and Park, Sungrae and Han, Dongyoon and Yun, Sangdoo and Oh, Seong Joon and Lee, Hwalsuk},
title = {What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {10},
year = {2019}
}
```
|
baudm/abinet-lv
|
baudm
| 2022-08-28T19:00:28Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:55:28Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# ABINet-LV v1.0
ABINet model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{Fang_2021_CVPR,
author = {Fang, Shancheng and Xie, Hongtao and Wang, Yuxin and Mao, Zhendong and Zhang, Yongdong},
title = {Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {6},
year = {2021},
pages = {7098-7107}
}
```
|
baudm/vitstr-small
|
baudm
| 2022-08-28T18:47:40Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:41:54Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# ViTSTR small v1.0
ViTSTR model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32 with a patch size of 8x4.
Disclaimer: this model card was not written by the original author.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{atienza2021vision,
title={Vision transformer for fast and efficient scene text recognition},
author={Atienza, Rowel},
booktitle={International Conference on Document Analysis and Recognition},
pages={319--334},
year={2021},
organization={Springer}
}
```
|
caffsean/t5-base-finetuned-keyword-to-text-generation
|
caffsean
| 2022-08-28T18:36:02Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T23:29:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-keyword-to-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-keyword-to-text-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4643
- Rouge1: 2.1108
- Rouge2: 0.3331
- Rougel: 1.7368
- Rougelsum: 1.7391
- Gen Len: 16.591
## Model description
More information needed
## Intended uses & limitations
More information needed
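A minimal usage sketch (assumptions: the checkpoint works with the standard `transformers` text2text-generation pipeline, and the keyword prompt below is only illustrative, since the fine-tuning input format is not documented):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard text2text-generation pipeline.
# The exact keyword input format used during fine-tuning is not documented; this prompt is illustrative.
generator = pipeline("text2text-generation", model="caffsean/t5-base-finetuned-keyword-to-text-generation")

print(generator("beach, sunset, vacation", max_length=64)[0]["generated_text"])
```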
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 375 | 3.4862 | 2.0718 | 0.326 | 1.7275 | 1.7308 | 16.7995 |
| 3.5928 | 2.0 | 750 | 3.4761 | 2.0829 | 0.3253 | 1.7192 | 1.7224 | 16.773 |
| 3.5551 | 3.0 | 1125 | 3.4701 | 2.1028 | 0.3272 | 1.7274 | 1.7296 | 16.6505 |
| 3.5225 | 4.0 | 1500 | 3.4671 | 2.11 | 0.3305 | 1.7343 | 1.7362 | 16.699 |
| 3.5225 | 5.0 | 1875 | 3.4653 | 2.1134 | 0.3319 | 1.7418 | 1.7437 | 16.5485 |
| 3.4987 | 6.0 | 2250 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.4939 | 7.0 | 2625 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.498 | 8.0 | 3000 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
baudm/parseq-tiny
|
baudm
| 2022-08-28T18:31:35Z | 0 | 2 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:31:35Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# PARSeq tiny v1.0
PARSeq model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32 with a patch size of 8x4.
## Model description
PARSeq (Permuted Autoregressive Sequence) models unify the prevailing modeling/decoding schemes in Scene Text Recognition (STR). In particular, with a single model, it allows for context-free non-autoregressive inference (like CRNN and ViTSTR), context-aware autoregressive inference (like TRBA), and bidirectional iterative refinement (like ABINet).

## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
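Pending the official snippet, a minimal sketch via `torch.hub` (assumptions: the [parseq GitHub repo](https://github.com/baudm/parseq) exposes a `parseq_tiny` hub entry point for this checkpoint, and the model expects 128x32 inputs normalized to [-1, 1]):
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumption: this checkpoint is available through the 'parseq_tiny' torch.hub entry point.
model = torch.hub.load("baudm/parseq", "parseq_tiny", pretrained=True).eval()

# Assumption: 32x128 (height x width) inputs normalized to [-1, 1], as in the parseq repo.
preprocess = transforms.Compose([
    transforms.Resize((32, 128)),
    transforms.ToTensor(),
    transforms.Normalize(0.5, 0.5),
])

img = preprocess(Image.open("word_crop.png").convert("RGB")).unsqueeze(0)  # illustrative input image
logits = model(img)
label, confidence = model.tokenizer.decode(logits.softmax(-1))  # greedy decoding
print(label[0])
```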
### BibTeX entry and citation info
```bibtex
@InProceedings{bautista2022parseq,
author={Bautista, Darwin and Atienza, Rowel},
title={Scene Text Recognition with Permuted Autoregressive Sequence Models},
booktitle={Proceedings of the 17th European Conference on Computer Vision (ECCV)},
month={10},
year={2022},
publisher={Springer International Publishing},
address={Cham}
}
```
|
vikram71198/roberta-base-finetuned-irony
|
vikram71198
| 2022-08-28T18:19:31Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"Irony Detection",
"Text Classification",
"tweet_eval",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T17:36:41Z |
---
license: apache-2.0
tags:
- Irony Detection
- Text Classification
- tweet_eval
#metrics:
#- accuracy
model-index:
- name: roberta-base-finetuned-irony
results: []
---
# roberta-base-finetuned-irony
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the Irony Dataset from [Tweet_Eval](https://huggingface.co/datasets/tweet_eval).
This is the classification report after training for 10 full epochs:
| | Precision | Recall | F-1 Score | Support |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| Not Irony (0) | 0.73 | 0.78| 0.75 | 473 |
| Irony (1) | 0.62 | 0.56 | 0.59 | 311 |
| accuracy | | | 0.69 | 784 |
| macro avg | 0.68 | 0.67 | 0.67 | 784 |
| weighted avg | 0.69 | 0.69 | 0.69 | 784 |
## Training and evaluation data
The full training process is available in [this](https://github.com/vikram71198/Transformers/tree/main/Irony%20Detection) repository. The dataset was split into 2,862 examples for training, 955 for validation, and 784 for testing.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: default AdamW Optimizer
- num_epochs: 10
- warmup_steps: 500
- weight_decay: 0.01
- random seed: 42
Training ran for 10 full epochs on a Colab Tesla P100-PCIE-16GB GPU.
### Training results
| Epoch | Training Loss | Validation Loss |
|:-------------:|:----:|:---------------:|
| 1 | 0.691600 |0.6738196 |
| 2 | 0.621800 | 0.611911 |
| 3 | 0.510800 | 0.516174 |
| 4 | 0.384700 | 0.574607 |
| 5 | 0.273900 | 0.644613 |
| 6 | 0.162300 | 0.846262 |
| 7 | 0.119000 | 0.869178 |
| 8 | 0.079700 | 1.131574 |
| 9 | 0.035800 | 1.5123457 |
| 10 | 0.013600 |1.5706617 |
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch.nn as nn
tokenizer = AutoTokenizer.from_pretrained("vikram71198/roberta-base-finetuned-irony")
model = AutoModelForSequenceClassification.from_pretrained("vikram71198/roberta-base-finetuned-irony")
#Following the same truncation & padding strategy used while training
encoded_input = tokenizer("Enter any text/tweet to be classified. Can input a list of tweets too.", padding = True, return_tensors='pt')
output = model(**encoded_input)["logits"]
#detaching the output from the computation graph
detached_output = output.detach()
#Applying softmax here for single label classification
softmax = nn.Softmax(dim = 1)
prediction_probabilities = list(softmax(detached_output).detach().numpy())
predictions = []
for x,y in prediction_probabilities:
predictions.append("not_irony") if x > y else predictions.append("irony")
print(predictions)
```
Please note that if you're running inference on a large dataset, split it into multiple batches; otherwise your RAM will overflow unless you're using a really high-end GPU/TPU setup. I'd recommend a batch size of about 50 if you're working with a vanilla GPU setup.
### Framework versions
- Transformers 4.12.5
- Pytorch 1.11.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
muhtasham/bert-small-eurlex
|
muhtasham
| 2022-08-28T17:47:16Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"generated_from_trainer",
"dataset:eurlex",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-08-27T21:06:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eurlex
model-index:
- name: bert-small-eurlex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-eurlex
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the eurlex dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9536 | 1.5 | 1000 | 2.0670 |
| 2.0331 | 3.0 | 2000 | 1.7540 |
| 1.8046 | 4.5 | 3000 | 1.5993 |
| 1.678 | 6.0 | 4000 | 1.5039 |
| 1.6074 | 7.5 | 5000 | 1.4544 |
| 1.5664 | 8.99 | 6000 | 1.4260 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-finetuned-tapt-lm-music
|
silviacamplani
| 2022-08-28T16:28:36Z | 7 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T16:24:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-finetuned-tapt-lm-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-tapt-lm-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aware-ai/wav2vec2-xls-r-300m-english
|
aware-ai
| 2022-08-28T16:15:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_10_0",
"generated_from_trainer",
"de",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-26T12:31:54Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_10_0
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-english
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5577
- Wer: 0.3864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.317 | 1.0 | 7194 | 0.5577 | 0.3864 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
antoinev17/xlm-roberta-base-finetuned-panx-de
|
antoinev17
| 2022-08-28T16:01:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T14:59:32Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8658245134858313
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1455
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.259 | 1.0 | 1258 | 0.1906 | 0.8256 |
| 0.1332 | 2.0 | 2516 | 0.1491 | 0.8495 |
| 0.0841 | 3.0 | 3774 | 0.1455 | 0.8658 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rajistics/layoutlmv2-finetuned-cord_100
|
rajistics
| 2022-08-28T15:48:40Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T01:37:57Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
model-index:
- name: layoutlmv2-finetuned-cord_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the cord-layoutlmv3 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-finetuned-dapt-lm-music
|
silviacamplani
| 2022-08-28T15:42:41Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T11:31:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-finetuned-dapt-lm-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-dapt-lm-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 32911, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
buddhist-nlp/mbart-buddhist-chinese-to-eng
|
buddhist-nlp
| 2022-08-28T15:27:25Z | 10 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"zh",
"en",
"autotrain_compatible",
"region:us"
] |
translation
| 2022-08-28T10:39:38Z |
---
language:
- zh
- en
tags:
- translation
widget:
- text: "如是我闻:一时,佛在舍卫国只树花林窟,与大比丘众千二百五十人俱。"
inference: false
---
This model is based on MBART and translates Buddhist Chinese to English. It is optimized for a sequence length of 300 (Buddhist Chinese input sequences shouldn't exceed 150 characters). The model uses "#" with a space before and after as a delimiter between sentences (in addition to the normal Chinese punctuation). Input should be converted to simplified Chinese before running. The model also handles short sequences poorly; for best results, supply input sequences between 100 and 150 characters long.
The model shows good performance on Sūtra texts and performs reasonably well on Abhidharma and Yogācāra. However, it has the usual problems that NMT systems have with named entities (names of persons and places). It also tends to hallucinate at times, i.e. it can generate a translation that has no direct relationship with the input.
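A minimal usage sketch with the generic `transformers` seq2seq classes (assumptions: the saved configuration handles the English target-language token, so no explicit forcing is shown; the example sentence is the one from the widget above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: the checkpoint loads with the generic seq2seq auto classes; input must be
# simplified Chinese, at most ~150 characters per sequence, as described above.
model_name = "buddhist-nlp/mbart-buddhist-chinese-to-eng"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "如是我闻:一时,佛在舍卫国只树花林窟,与大比丘众千二百五十人俱。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```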
|
buddhist-nlp/sanstib
|
buddhist-nlp
| 2022-08-28T15:02:42Z | 104 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"license:lgpl-lr",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-04-22T08:35:32Z |
---
license: lgpl-lr
---
This model creates Sanskrit and Tibetan sentence embeddings and can be used for semantic similarity tasks.
Sanskrit needs to be segmented first and converted into internal transliteration (I will upload the corresponding script here soon). The Tibetan needs to be converted into Wylie transliteration.
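A minimal sketch of how embeddings might be extracted with the generic `transformers` classes (mean pooling over the last hidden states is an assumption, not a documented pooling strategy; the placeholder inputs must already be preprocessed as described above):
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: sentence embeddings are obtained by mean-pooling the last hidden states.
model_name = "buddhist-nlp/sanstib"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["<segmented, transliterated Sanskrit sentence>", "<Wylie-transliterated Tibetan sentence>"]  # placeholders
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (batch, seq, dim)
mask = inputs.attention_mask.unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)          # mean pooling over real tokens
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```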
|
tanvirkhan/distilbert-base-uncased-finetuned-imdb
|
tanvirkhan
| 2022-08-28T14:59:47Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T11:50:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
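A minimal usage sketch (assuming the checkpoint works with the standard `transformers` fill-mask pipeline; the sentence is illustrative):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="tanvirkhan/distilbert-base-uncased-finetuned-imdb")

print(fill_mask("This movie was an absolute [MASK]."))  # illustrative input; distilbert uses the [MASK] token
```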
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yirmibesogluz/t2t-ner-ade-balanced
|
yirmibesogluz
| 2022-08-28T12:59:14Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"adverse-drug-events",
"twitter",
"social-media-mining-for-health",
"SMM4H",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-28T12:30:48Z |
---
license: mit
language: en
tags:
- adverse-drug-events
- twitter
- social-media-mining-for-health
- SMM4H
widget:
- text: "ner ade: i'm so irritable when my vyvanse wears off"
example_title: "ADE"
- text: "ner ade: bout to have a kick ass summer then it's time to get serious fer school #vyvanse #geekmode"
example_title: "noADE"
---
## t2t-ner-ade-balanced
t2t-ner-ade-balanced is a text-to-text (**t2t**) adverse drug event (**ade**) extraction (NER) model trained with over- and undersampled (balanced) English tweets reporting adverse drug events. It is trained as part of the BOUN-TABI system for the Social Media Mining for Health (SMM4H) 2022 shared task. The system description paper has been accepted for publication in *Proceedings of the Seventh Social Media Mining for Health (#SMM4H) Workshop and Shared Task* and will be available soon. The source code has been released on GitHub at [https://github.com/gokceuludogan/boun-tabi-smm4h22](https://github.com/gokceuludogan/boun-tabi-smm4h22).
The model utilizes the T5 model and its text-to-text formulation. The inputs are fed to the model with the task prefix "ner ade:", followed by a sentence/tweet. In turn, the model returns either the extracted adverse drug event span or "none".
## Requirements
```
sentencepiece
transformers
```
## Usage
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yirmibesogluz/t2t-ner-ade-balanced")
model = AutoModelForSeq2SeqLM.from_pretrained("yirmibesogluz/t2t-ner-ade-balanced")
predictor = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
predictor("ner ade: i'm so irritable when my vyvanse wears off")
```
## Citation
```bibtex
@inproceedings{uludogan-gokce-yirmibesoglu-zeynep-2022-boun-tabi-smm4h22,
title = "{BOUN}-{TABI}@{SMM4H}'22: Text-to-{T}ext {A}dverse {D}rug {E}vent {E}xtraction with {D}ata {B}alancing and {P}rompting",
author = "Uludo{\u{g}}an, G{\"{o}}k{\c{c}}e and Yirmibe{\c{s}}o{\u{g}}lu, Zeynep",
booktitle = "Proceedings of the Seventh Social Media Mining for Health ({\#}SMM4H) Workshop and Shared Task",
year = "2022",
}
```
|
yirmibesogluz/t2t-assert-ade-balanced
|
yirmibesogluz
| 2022-08-28T12:02:18Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"adverse-drug-events",
"twitter",
"social-media-mining-for-health",
"SMM4H",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-28T11:13:39Z |
---
license: mit
language: en
tags:
- adverse-drug-events
- twitter
- social-media-mining-for-health
- SMM4H
widget:
- text: "assert ade: joints killing me now i have gone back up on the lamotrigine. sick of side effects. sick of meds. want my own self back. knackered today"
example_title: "ADE"
- text: "assert ade: bout to have a kick ass summer then it's time to get serious fer school #vyvanse #geekmode"
example_title: "noADE"
---
## t2t-assert-ade-balanced
t2t-assert-ade-balanced is a text-to-text (**t2t**) adverse drug event (**ade**) detection model trained with over- and undersampled (balanced) English tweets reporting adverse drug events. It is trained as part of the BOUN-TABI system for the Social Media Mining for Health (SMM4H) 2022 shared task. The system description paper has been accepted for publication in *Proceedings of the Seventh Social Media Mining for Health (#SMM4H) Workshop and Shared Task* and will be available soon. The source code has been released on GitHub at [https://github.com/gokceuludogan/boun-tabi-smm4h22](https://github.com/gokceuludogan/boun-tabi-smm4h22).
The model utilizes the T5 model and its text-to-text formulation. The inputs are fed to the model with the task prefix "assert ade:", followed by a sentence/tweet. In turn, the model outputs either "adverse event problem" or "healthy okay".
## Requirements
```
sentencepiece
transformers
```
## Usage
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yirmibesogluz/t2t-assert-ade-balanced")
model = AutoModelForSeq2SeqLM.from_pretrained("yirmibesogluz/t2t-assert-ade-balanced")
predictor = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
predictor("assert ade: joints killing me now i have gone back up on the lamotrigine. sick of side effects. sick of meds. want my own self back. knackered today")
```
## Citation
```bibtex
@inproceedings{uludogan-gokce-yirmibesoglu-zeynep-2022-boun-tabi-smm4h22,
title = "{BOUN}-{TABI}@{SMM4H}'22: Text-to-{T}ext {A}dverse {D}rug {E}vent {E}xtraction with {D}ata {B}alancing and {P}rompting",
author = "Uludo{\u{g}}an, G{\"{o}}k{\c{c}}e and Yirmibe{\c{s}}o{\u{g}}lu, Zeynep",
booktitle = "Proceedings of the Seventh Social Media Mining for Health ({\#}SMM4H) Workshop and Shared Task",
year = "2022",
}
```
|
Shivus/q-FrozenLake-v1-4x4-noSlippery
|
Shivus
| 2022-08-28T11:25:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T11:25:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
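# Note: `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Deep Reinforcement Learning Class notebook (they are not part of a published
# package), and `gym` must be imported for `gym.make` below.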
model = load_from_hub(repo_id="Shivus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
flair/ner-german-large
|
flair
| 2022-08-28T09:08:06Z | 221,703 | 39 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:conll2003",
"arxiv:2011.06993",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---
## German NER in Flair (large model)
This is the large 4-class NER model for German that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92,31** (CoNLL-03 German revised)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-german-large")
# make example sentence
sentence = Sentence("George Washington ging nach Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_GERMAN
corpus = CONLL_03_GERMAN()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-german-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
paola-md/recipe-lr1e05-wd0.1-bs32
|
paola-md
| 2022-08-28T08:13:25Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T07:45:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
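The card reports regression-style metrics (RMSE/MAE), so the sequence-classification head presumably produces a single score; a minimal sketch under that assumption (the input recipe text is a placeholder, since the expected input format is not documented):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("paola-md/recipe-lr1e05-wd0.1-bs32")
model = AutoModelForSequenceClassification.from_pretrained("paola-md/recipe-lr1e05-wd0.1-bs32")

# Placeholder recipe-style input.
inputs = tokenizer("combine flour, sugar and butter, then bake for 20 minutes", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits  # expected shape (1, 1) if the head is a regressor
print(score)
```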
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yoyoyo1118/xlm-roberta-base-finetuned-panx-de-fr
|
yoyoyo1118
| 2022-08-28T07:53:58Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T07:31:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
- F1: 0.8590
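A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yoyoyo1118/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```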
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 |
| 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 |
| 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
paola-md/recipe-lr1e05-wd0.005-bs32
|
paola-md
| 2022-08-28T07:45:24Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T07:17:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.005-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.005-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4153 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr1e05-wd0.01-bs32
|
paola-md
| 2022-08-28T07:17:08Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T06:49:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.01-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.01-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4282 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Minds/rare-puppers
|
Minds
| 2022-08-28T06:54:12Z | 45 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-28T06:54:01Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8888888955116272
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### fresh leaf of plant

#### plant diseases

|
paola-md/recipe-lr8e06-wd0.1-bs32
|
paola-md
| 2022-08-28T06:21:06Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T05:53:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- Rmse: 0.5246
- Mse: 0.2752
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2773 | 0.5266 | 0.2773 | 0.4297 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4144 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5252 | 0.2758 | 0.4233 |
| 0.2705 | 6.0 | 3738 | 0.2752 | 0.5246 | 0.2752 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yoyoyo1118/xlm-roberta-base-finetuned-panx-de
|
yoyoyo1118
| 2022-08-28T06:05:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T05:45:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rebolforces/Reinforce-CartPole-v1-exp2
|
rebolforces
| 2022-08-28T05:35:42Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T05:35:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-exp2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
paola-md/recipe-lr8e06-wd0.01-bs32
|
paola-md
| 2022-08-28T05:25:05Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T04:57:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.01-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.01-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2753
- Rmse: 0.5246
- Mse: 0.2753
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2774 | 0.5266 | 0.2774 | 0.4296 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4145 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 |
| 0.2705 | 6.0 | 3738 | 0.2753 | 0.5246 | 0.2753 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rebolforces/Reinforce-CartPole-v1-exp1
|
rebolforces
| 2022-08-28T05:11:04Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T05:10:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-exp1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 458.90 +/- 80.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
paola-md/distilroberta-recipes
|
paola-md
| 2022-08-28T04:57:01Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T04:29:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.02-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2784
- Rmse: 0.5277
- Mse: 0.2784
- Mae: 0.4161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2774 | 1.0 | 623 | 0.2749 | 0.5243 | 0.2749 | 0.4184 |
| 0.2741 | 2.0 | 1246 | 0.2741 | 0.5235 | 0.2741 | 0.4173 |
| 0.2724 | 3.0 | 1869 | 0.2855 | 0.5343 | 0.2855 | 0.4428 |
| 0.2713 | 4.0 | 2492 | 0.2758 | 0.5252 | 0.2758 | 0.4013 |
| 0.2695 | 5.0 | 3115 | 0.2777 | 0.5270 | 0.2777 | 0.4245 |
| 0.2674 | 6.0 | 3738 | 0.2784 | 0.5277 | 0.2784 | 0.4161 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.1-bs32
|
paola-md
| 2022-08-28T04:28:49Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T04:15:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2861
- Rmse: 0.5349
- Mse: 0.2861
- Mae: 0.4436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2775 | 1.0 | 623 | 0.2744 | 0.5238 | 0.2744 | 0.4159 |
| 0.274 | 2.0 | 1246 | 0.2737 | 0.5232 | 0.2737 | 0.4163 |
| 0.2724 | 3.0 | 1869 | 0.2861 | 0.5349 | 0.2861 | 0.4436 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rebolforces/Reinforce-CartPole-v1-baseline
|
rebolforces
| 2022-08-28T04:16:35Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T04:14:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-baseline
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 215.80 +/- 39.04
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
paola-md/recipe-lr1e05-wd0.02-bs8
|
paola-md
| 2022-08-28T03:44:00Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T03:18:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.02-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.02-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2781
- Rmse: 0.5273
- Mse: 0.2781
- Mae: 0.4279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2740 | 0.5234 | 0.2740 | 0.4172 |
| 0.2738 | 2.0 | 4980 | 0.2783 | 0.5276 | 0.2783 | 0.4297 |
| 0.2724 | 3.0 | 7470 | 0.2781 | 0.5273 | 0.2781 | 0.4279 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/pink_rodent
|
huggingtweets
| 2022-08-28T02:33:36Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-28T02:32:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pink_rodent/1661654012124/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1558011857838931968/JdtfxNhf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mouse</div>
<div style="text-align: center; font-size: 14px;">@pink_rodent</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mouse.
| Data | mouse |
| --- | --- |
| Tweets downloaded | 242 |
| Retweets | 48 |
| Short tweets | 55 |
| Tweets kept | 139 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/182s7hgh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pink_rodent's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35lwy7go) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35lwy7go/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pink_rodent')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
paola-md/recipe-lr8e06-wd0.02-bs8
|
paola-md
| 2022-08-28T02:02:34Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T01:38:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.02-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.02-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
- Rmse: 0.5271
- Mse: 0.2778
- Mae: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2739 | 0.5233 | 0.2739 | 0.4160 |
| 0.2739 | 2.0 | 4980 | 0.2770 | 0.5263 | 0.2770 | 0.4279 |
| 0.2726 | 3.0 | 7470 | 0.2778 | 0.5271 | 0.2778 | 0.4289 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr8e06-wd0.1-bs8
|
paola-md
| 2022-08-28T01:37:28Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T01:13:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.1-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.1-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
- Rmse: 0.5270
- Mse: 0.2778
- Mae: 0.4290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2741 | 0.5235 | 0.2741 | 0.4176 |
| 0.2739 | 2.0 | 4980 | 0.2773 | 0.5266 | 0.2773 | 0.4286 |
| 0.2726 | 3.0 | 7470 | 0.2778 | 0.5270 | 0.2778 | 0.4290 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
anas-awadalla/distilroberta-base-task-specific-distilation-on-squad
|
anas-awadalla
| 2022-08-28T01:17:22Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-27T23:50:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilroberta-base-task-specific-distilation-on-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-task-specific-distilation-on-squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset.
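A minimal usage sketch with the question-answering pipeline (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/distilroberta-base-task-specific-distilation-on-squad",
)
print(qa(
    question="What was the model fine-tuned on?",
    context="This distilroberta-base student model was fine-tuned on the SQuAD dataset.",
))
```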
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
infiniteperplexity/xlm-roberta-base-finetuned-panx-de
|
infiniteperplexity
| 2022-08-28T01:09:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T00:45:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
paola-md/recipe-lr8e06-wd0.01-bs8
|
paola-md
| 2022-08-28T00:47:15Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T00:22:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.01-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.01-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Rmse: 0.5274
- Mse: 0.2782
- Mae: 0.4299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2739 | 2.0 | 4980 | 0.2769 | 0.5262 | 0.2769 | 0.4274 |
| 0.2725 | 3.0 | 7470 | 0.2782 | 0.5274 | 0.2782 | 0.4299 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jfrojanoj/distilbert-base-uncased-finetuned-emotion
|
jfrojanoj
| 2022-08-28T00:01:30Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T23:33:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9257579044598276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Accuracy: 0.926
- F1: 0.9258
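A minimal usage sketch with the text-classification pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jfrojanoj/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled that the experiment finally worked!"))
```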
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8453 | 1.0 | 250 | 0.3075 | 0.9115 | 0.9083 |
| 0.2467 | 2.0 | 500 | 0.2140 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.1-bs8
|
paola-md
| 2022-08-27T23:57:08Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T23:32:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.1-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.1-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2768
- Rmse: 0.5262
- Mse: 0.2768
- Mae: 0.4258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 2490 | 0.2745 | 0.5239 | 0.2745 | 0.4180 |
| 0.2739 | 2.0 | 4980 | 0.2814 | 0.5304 | 0.2814 | 0.4321 |
| 0.2723 | 3.0 | 7470 | 0.2768 | 0.5262 | 0.2768 | 0.4258 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.005-bs8
|
paola-md
| 2022-08-27T23:32:03Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T23:07:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.005-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.005-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2771
- Rmse: 0.5264
- Mse: 0.2771
- Mae: 0.4266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 2490 | 0.2746 | 0.5240 | 0.2746 | 0.4202 |
| 0.274 | 2.0 | 4980 | 0.2827 | 0.5317 | 0.2827 | 0.4360 |
| 0.2723 | 3.0 | 7470 | 0.2771 | 0.5264 | 0.2771 | 0.4266 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
theojolliffe/bart-paraphrase-v4-e1-feedback
|
theojolliffe
| 2022-08-27T22:37:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T22:26:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-v4-e1-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v4-e1-feedback
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1) on the None dataset.
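A minimal text2text-generation sketch (the input sentence and generation length are illustrative; the intended prompt format for this fine-tune is not documented in the card):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="theojolliffe/bart-paraphrase-v4-e1-feedback")
print(generator("The system was upgraded to improve resilience and reduce downtime.", max_length=60))
```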
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 27 | 3.9313 | 67.6687 | 57.1881 | 66.7507 | 66.2643 | 20.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
|
paola-md/recipe-lr1e05-wd0.1-bs16
|
paola-md
| 2022-08-27T22:24:30Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T22:07:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.1-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.1-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2794
- Rmse: 0.5286
- Mse: 0.2794
- Mae: 0.4343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2744 | 0.5239 | 0.2744 | 0.4124 |
| 0.2739 | 2.0 | 2490 | 0.2757 | 0.5250 | 0.2757 | 0.4211 |
| 0.2727 | 3.0 | 3735 | 0.2794 | 0.5286 | 0.2794 | 0.4343 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jackoyoungblood/Reinforce-PongPolGrad
|
jackoyoungblood
| 2022-08-27T21:43:41Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T21:41:20Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PongPolGrad
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
paola-md/recipe-lr8e06-wd0.02-bs16
|
paola-md
| 2022-08-27T21:31:07Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T21:13:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.02-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.02-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2795
- Rmse: 0.5287
- Mse: 0.2795
- Mae: 0.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2745 | 0.5239 | 0.2745 | 0.4140 |
| 0.2741 | 2.0 | 2490 | 0.2760 | 0.5254 | 0.2760 | 0.4222 |
| 0.2729 | 3.0 | 3735 | 0.2795 | 0.5287 | 0.2795 | 0.4342 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
RohanK447/swin-tiny-patch4-window7-224-finetuned-eurosat
|
RohanK447
| 2022-08-27T21:27:21Z | 66 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-27T21:03:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9748148148148148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0741
- Accuracy: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2868 | 1.0 | 190 | 0.1234 | 0.9574 |
| 0.1519 | 2.0 | 380 | 0.0741 | 0.9748 |
| 0.1211 | 3.0 | 570 | 0.0724 | 0.9744 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
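A minimal inference sketch using the 🤗 `image-classification` pipeline (the image path is a placeholder; the labels come from the image folders used for fine-tuning):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="RohanK447/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
preds = classifier("path/to/satellite_image.png")  # placeholder path
print(preds[:3])  # top-3 labels with scores
```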
|
Bahushruth/distilbert-base-uncased-distilled-clinc
|
Bahushruth
| 2022-08-27T21:15:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T20:55:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
paola-md/recipe-lr2e05-wd0.1-bs16
|
paola-md
| 2022-08-27T20:01:59Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T19:44:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.1-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.1-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2783
- Rmse: 0.5275
- Mse: 0.2783
- Mae: 0.4319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2771 | 1.0 | 1245 | 0.2744 | 0.5238 | 0.2744 | 0.4105 |
| 0.2738 | 2.0 | 2490 | 0.2819 | 0.5309 | 0.2819 | 0.4298 |
| 0.2724 | 3.0 | 3735 | 0.2783 | 0.5275 | 0.2783 | 0.4319 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.005-bs16
|
paola-md
| 2022-08-27T19:44:18Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T19:27:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.005-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.005-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2780
- Rmse: 0.5272
- Mse: 0.2780
- Mae: 0.4314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 1245 | 0.2743 | 0.5237 | 0.2743 | 0.4112 |
| 0.2738 | 2.0 | 2490 | 0.2811 | 0.5302 | 0.2811 | 0.4288 |
| 0.2724 | 3.0 | 3735 | 0.2780 | 0.5272 | 0.2780 | 0.4314 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
theojolliffe/T5-model-1-feedback
|
theojolliffe
| 2022-08-27T19:25:07Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T21:31:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: T5-model-1-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-feedback
This model is a fine-tuned version of [theojolliffe/T5-model-1-d-4](https://huggingface.co/theojolliffe/T5-model-1-d-4) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 130 | 0.4120 | 61.7277 | 46.2681 | 61.1325 | 61.2797 | 13.2632 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
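A minimal generation sketch with the `text2text-generation` pipeline; any task prefix or special input format expected by the model is not documented here, so plain text is assumed:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="theojolliffe/T5-model-1-feedback")
result = generator("Replace this with the feedback text to process.", max_length=64)
print(result[0]["generated_text"])
```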
|
BigSalmon/Infill2
|
BigSalmon
| 2022-08-27T19:24:38Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T19:08:51Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Infill2")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Infill2")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
Infill / Infilling / Masking / Phrase Masking
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
```
```
original: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the [MASK] star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently [MASK] the big screen in Garden State, which he also directed. Farrell is pencilled in to [MASK] of Crockett in a film version of 1980s police [MASK] Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme.
infill: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the show. The film star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently been seen on the big screen in Garden State, which he also directed. Farrell is pencilled in to play the role of Crockett in a film version of 1980s police drama Miami Vice. Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme.
```
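Judging from the examples above, the prompt pattern is `<left context> [blank] <right context> [sep]`, and the model generates the missing span terminated by `[answer]`. A small helper sketch that reuses the tokenizer and model loaded at the top of this card (the helper name and the stopping rule are illustrative, not part of the original code):
```python
def infill(left, right, max_new_tokens=10):
    """Fill the [blank] between two context fragments and return the generated span."""
    prompt = f"{left} [blank] {right} [sep]"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=max_new_tokens,
                            do_sample=True, top_k=50, top_p=0.95)
    completion = tokenizer.decode(output[0][input_ids.shape[1]:])
    return completion.split("[answer]")[0].strip()  # keep only the filled-in span

print(infill("few sights are as", "new york city as the colorful, flashing signage of its bodegas"))
```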
|
Bahushruth/distilbert-base-uncased-finetuned-clinc
|
Bahushruth
| 2022-08-27T19:19:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T18:37:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7711
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2830 | 0.7426 |
| 2.627 | 2.0 | 636 | 1.8728 | 0.8410 |
| 1.5429 | 3.0 | 954 | 1.1555 | 0.8913 |
| 1.0089 | 4.0 | 1272 | 0.8530 | 0.9126 |
| 0.7939 | 5.0 | 1590 | 0.7711 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion
|
ChaoLi
| 2022-08-27T19:17:37Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T19:01:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9242101664142519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2189
- Accuracy: 0.9245
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8191 | 1.0 | 250 | 0.3159 | 0.9065 | 0.9046 |
| 0.2411 | 2.0 | 500 | 0.2189 | 0.9245 | 0.9242 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
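A minimal inference sketch with the `text-classification` pipeline (the labels come from the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ChaoLi/nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how happy this made me!"))
```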
|
curt-tigges/ppo-LunarLander-v2
|
curt-tigges
| 2022-08-27T19:12:38Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T19:12:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 252.72 +/- 21.52
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
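Until the TODO above is filled in, a minimal loading sketch — the checkpoint filename inside the repo is an assumption, so check the repository's file list:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumption: the checkpoint is stored as "ppo-LunarLander-v2.zip" in this repo.
checkpoint = load_from_hub(repo_id="curt-tigges/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# model.predict(observation) can now be used inside a LunarLander-v2 environment loop.
```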
|
paola-md/recipe-gauss-wo-outliers
|
paola-md
| 2022-08-27T17:24:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T16:33:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-gauss-wo-outliers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-gauss-wo-outliers
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2885
- Rmse: 0.5371
- Mse: 0.2885
- Mae: 0.4213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|
| 0.2768 | 1.0 | 1245 | 0.2747 | 0.5241 | 0.2747 | 0.4081 |
| 0.2737 | 2.0 | 2490 | 0.2793 | 0.5285 | 0.2793 | 0.4288 |
| 0.2722 | 3.0 | 3735 | 0.2792 | 0.5284 | 0.2792 | 0.4332 |
| 0.2703 | 4.0 | 4980 | 0.2770 | 0.5263 | 0.2770 | 0.4000 |
| 0.2682 | 5.0 | 6225 | 0.2758 | 0.5252 | 0.2758 | 0.4183 |
| 0.2658 | 6.0 | 7470 | 0.2792 | 0.5284 | 0.2792 | 0.4212 |
| 0.2631 | 7.0 | 8715 | 0.2769 | 0.5262 | 0.2769 | 0.4114 |
| 0.2599 | 8.0 | 9960 | 0.2802 | 0.5294 | 0.2802 | 0.4107 |
| 0.2572 | 9.0 | 11205 | 0.2885 | 0.5371 | 0.2885 | 0.4213 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
wannaphong/khanomtan-tts-v1.1
|
wannaphong
| 2022-08-27T16:41:51Z | 10 | 3 |
transformers
|
[
"transformers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-08-26T15:17:07Z |
---
license: apache-2.0
---
# KhanomTan TTS v1.1
KhanomTan TTS (ขนมตาล) is an open-source Thai text-to-speech model with multilingual, multi-speaker support covering Thai, English, and other languages.
KhanomTan TTS v1.1 is a multilingual YourTTS model that supports Thai. It was trained with code from 🐸 Coqui-TTS on the Thai speech corpora TSync 1* and TSync 2* together with [mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS](https://huggingface.co/datasets/mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS). Every voice with a problematic license (any voice not released under CC-0 or another public license) was removed from the model, so the model is released under the apache-2.0 license.
## Speakers
- Linda (English, female, [LJSpeech](https://keithito.com/LJ-Speech-Dataset/))
- Bernard (fr-fr, male, [m-ailabs](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset/))
- Kerstin (x-de, female, [Rhasspy](https://github.com/rhasspy/dataset-voice-kerstin))
- Thorsten (x-de, male, [Thorsten](https://www.thorsten-voice.de/))
## Language
- th-th: Thai
- en: English
- fr-fr: French
- pt-br: Portuguese
- x-de: German
- x-lb: Luxembourgish

*Note: TSync 1 and TSync 2 are not the complete corpora; only their publicly available portions are accessible.
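A minimal synthesis sketch with the 🐸 Coqui-TTS Python API (recent versions); the checkpoint and config file names, and the exact speaker and language IDs, are assumptions — check the repository files and the lists above:
```python
from TTS.api import TTS

# Assumptions: the file names below must match the files shipped in this repository.
tts = TTS(model_path="best_model.pth", config_path="config.json")
tts.tts_to_file(
    text="สวัสดีครับ ยินดีต้อนรับ",
    speaker="Linda",       # see the Speakers list above
    language="th-th",      # see the Language list above
    file_path="khanomtan_demo.wav",
)
```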
|
espnet/americasnlp22-asr-bzd
|
espnet
| 2022-08-27T16:17:43Z | 3 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"bzd",
"dataset:americasnlp22",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T19:06:19Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: bzd
datasets:
- americasnlp22
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-bzd`
This model was trained by Pavel Denisov using americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 66ca5df9f08b6084dbde4d9f312fa8ba0a47ecfc
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-bzd \
--lang bzd \
--local_data_opts "--lang bzd" \
--train_set train_bzd \
--valid_set dev_bzd \
--test_sets dev_bzd \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_bzd/text \
--bpe_train_text data/train_bzd/text
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 01:31:26 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_bzd_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_bzd|250|2056|15.3|65.1|19.6|7.5|92.3|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_bzd|250|10083|64.0|15.1|20.9|9.2|45.2|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_bzd|250|7203|52.4|27.9|19.7|7.4|55.1|100.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_bzd_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bzd_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_bzd_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bzd_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_bzd_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_bzd_sp/wav.scp
- speech
- sound
- - dump/raw/train_bzd_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_bzd/wav.scp
- speech
- sound
- - dump/raw/dev_bzd/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ̠
- ''''
- ▁e
- ▁
- e
- a
- r
- k
- ö
- i
- l
- ̀
- t
- s
- ▁i
- ▁a
- è
- á
- u
- ▁y
- ▁ta
- é
- w
- à
- m
- ▁d
- ́
- ë
- ▁k
- ▁s
- ke
- ▁se
- o
- ì
- ▁b
- ▁sa
- n
- ▁ts
- í
- ▁ie
- ▁m
- b
- la
- ▁tö
- ▁ka
- ▁kë
- ▁ku
- kö
- ▁ki
- na
- ▁é
- ka
- ta
- ▁dör
- ▁wö
- ne
- ▁wa
- ú
- ki
- ù
- pa
- ▁ma
- ▁ñ
- ▁ch
- j
- ñ
- ▁í
- ▁kiè
- ▁ì
- ▁wé
- ▁ë
- ch
- î
- ▁u
- ▁bu
- ▁sö
- ▁p
- p
- ▁wè
- 'no'
- ê
- ▁ajk
- ▁irir
- â
- ̂
- y
- ó
- ò
- d
- c
- û
- ô
- v
- z
- q
- g
- h
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/bzd_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/americasnlp22-asr-gvc
|
espnet
| 2022-08-27T16:15:08Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"gvc",
"dataset:americasnlp22",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T19:07:35Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: gvc
datasets:
- americasnlp22
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-gvc`
This model was trained by Pavel Denisov using americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 66ca5df9f08b6084dbde4d9f312fa8ba0a47ecfc
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-gvc \
--lang gvc \
--local_data_opts "--lang gvc" \
--train_set train_gvc \
--valid_set dev_gvc \
--test_sets dev_gvc \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_gvc/text \
--bpe_train_text data/train_gvc/text
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 03:29:33 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_gvc_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gvc|253|2206|12.4|72.4|15.1|6.7|94.2|99.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gvc|253|13453|64.7|15.5|19.9|10.2|45.6|99.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gvc|253|10229|58.3|22.3|19.4|11.0|52.7|99.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_gvc_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_gvc_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_gvc_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_gvc_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_gvc_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_gvc_sp/wav.scp
- speech
- sound
- - dump/raw/train_gvc_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_gvc/wav.scp
- speech
- sound
- - dump/raw/dev_gvc/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ▁
- a
- ''''
- u
- i
- o
- h
- U
- .
- ro
- re
- ri
- ka
- s
- na
- p
- e
- ▁ti
- t
- ':'
- d
- ha
- 'no'
- ▁hi
- m
- ▁ni
- '~'
- ã
- ta
- ▁wa
- ti
- ','
- ▁to
- b
- n
- ▁kh
- ma
- r
- se
- w
- l
- k
- '"'
- ñ
- õ
- g
- (
- )
- v
- f
- '?'
- A
- K
- z
- é
- T
- '!'
- D
- ó
- N
- á
- R
- P
- ú
- '0'
- í
- I
- '1'
- L
- '-'
- '8'
- E
- S
- Ã
- F
- '9'
- '6'
- G
- C
- x
- '3'
- '2'
- B
- W
- J
- H
- Y
- M
- j
- ç
- q
- c
- Ñ
- '4'
- '7'
- O
- y
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/gvc_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/americasnlp22-asr-tav
|
espnet
| 2022-08-27T16:12:23Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"tav",
"dataset:americasnlp22",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T19:08:34Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: tav
datasets:
- americasnlp22
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-tav`
This model was trained by Pavel Denisov using americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 66ca5df9f08b6084dbde4d9f312fa8ba0a47ecfc
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-tav \
--lang tav \
--local_data_opts "--lang tav" \
--train_set train_tav \
--valid_set dev_tav \
--test_sets dev_tav \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_tav/text \
--bpe_train_text data/train_tav/text
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 02:36:59 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_tav_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_tav|250|1201|3.0|83.1|13.9|17.0|114.0|99.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_tav|250|8606|57.5|19.9|22.7|12.0|54.5|99.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_tav|250|6741|49.2|28.5|22.3|12.6|63.4|99.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_tav_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_tav_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_tav_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_tav_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_tav_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_tav_sp/wav.scp
- speech
- sound
- - dump/raw/train_tav_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_tav/wav.scp
- speech
- sound
- - dump/raw/dev_tav/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ▁
- a
- ''''
- i
- h
- o
- e
- u
- U
- do
- ':'
- li
- na
- sa
- ▁ti
- n
- k
- ','
- '~'
- p
- ye
- le
- ka
- ta
- pe
- ▁ni
- ti
- ▁ihi
- ▁ma
- ▁~
- 'no'
- ya
- s
- ▁wa
- aye
- t
- .
- y
- m
- g
- d
- r
- ã
- '"'
- õ
- (
- )
- l
- '!'
- c
- '0'
- I
- '['
- ']'
- '2'
- '-'
- ç
- M
- '6'
- f
- A
- D
- '?'
- J
- j
- Y
- z
- Õ
- K
- '`'
- Ã
- O
- N
- F
- C
- '1'
- S
- P
- L
- T
- G
- v
- ñ
- b
- H
- E
- '3'
- '4'
- '5'
- '7'
- B
- W
- é
- ó
- ́
- w
- í
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/tav_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/americasnlp22-asr-gn
|
espnet
| 2022-08-27T16:09:50Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"gn",
"dataset:americasnlp22",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-06-13T17:11:45Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: gn
datasets:
- americasnlp22
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-gn`
This model was trained by Pavel Denisov using americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fc62b1ce3e50c5ef8a2ac8cedb0d92ac41df54ca
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-gn \
--lang gn \
--local_data_opts "--lang gn" \
--train_set train_gn \
--valid_set dev_gn \
--test_sets dev_gn \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_gn/text \
--bpe_train_text data/train_gn/text
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 12:17:58 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_gn_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gn|93|391|11.5|73.7|14.8|12.5|101.0|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gn|93|2946|83.4|7.9|8.7|8.7|25.3|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gn|93|2439|76.6|13.5|9.9|8.7|32.1|100.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_gn_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_gn_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_gn_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_gn_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_gn_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_gn_sp/wav.scp
- speech
- sound
- - dump/raw/train_gn_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_gn/wav.scp
- speech
- sound
- - dump/raw/dev_gn/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ▁
- a
- i
- e
- o
- ''''
- .
- u
- '"'
- p
- r
- n
- y
- h
- ▁"
- ▁o
- é
- re
- va
- pe
- s
- ra
- á
- he
- t
- mb
- g
- ka
- ã
- v
- ve
- je
- ▁ha
- te
- k
- ñ
- ha
- py
- ta
- ku
- ẽ
- ja
- pa
- O
- mi
- ó
- mo
- j
- ko
- ʼ
- ña
- me
- ma
- c
- M
- í
- H
- ú
- A
- ̃
- õ
- ý
- m
- P
- U
- ','
- ũ
- l
- ỹ
- N
- ĩ
- E
- I
- J
- L
- Á
- V
- S
- z
- '-'
- '?'
- Ñ
- R
- G
- Y
- T
- K
- C
- d
- “
- B
- ’
- ”
- D
- b
- f
- q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/gn_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
danieladejumo/Reinforce-Pixelcopter-PLE-v0
|
danieladejumo
| 2022-08-27T16:05:55Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T16:05:49Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.30 +/- 8.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
huggingtweets/tojibaceo-tojibawhiteroom
|
huggingtweets
| 2022-08-27T15:47:39Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-26T15:54:01Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tojibaceo-tojibawhiteroom/1661615254424/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1508824472924659725/267f4Lkm_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509337156787003394/WjOdf_-m_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tojiba CPU Corp BUDDIES MINTING NOW (🏭,🏭) & Tojiba White Room (T__T).1</div>
<div style="text-align: center; font-size: 14px;">@tojibaceo-tojibawhiteroom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tojiba CPU Corp BUDDIES MINTING NOW (🏭,🏭) & Tojiba White Room (T__T).1.
| Data | Tojiba CPU Corp BUDDIES MINTING NOW (🏭,🏭) | Tojiba White Room (T__T).1 |
| --- | --- | --- |
| Tweets downloaded | 1613 | 704 |
| Retweets | 774 | 0 |
| Short tweets | 279 | 82 |
| Tweets kept | 560 | 622 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kju2ojf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tojibaceo-tojibawhiteroom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15twdubf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15twdubf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tojibaceo-tojibawhiteroom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
brightink/Stable_Diffusion_Demo
|
brightink
| 2022-08-27T14:51:44Z | 0 | 0 | null |
[
"license:afl-3.0",
"region:us"
] | null | 2022-08-27T14:49:16Z |
---
title: Stable Diffusion
emoji: 🏃
colorFrom: red
colorTo: red
sdk: gradio
sdk_version: 3.1.7
app_file: app.py
pinned: false
license: afl-3.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
theojolliffe/T5-model-1-d-4
|
theojolliffe
| 2022-08-27T14:20:07Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T21:54:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-model-1-d-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-d-4
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0456
- Rouge1: 93.3486
- Rouge2: 82.1873
- Rougel: 92.8611
- Rougelsum: 92.7768
- Gen Len: 14.9953
## Model description
More information needed
## Intended uses & limitations
More information needed
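That said, since this is a fine-tuned `t5-base` seq2seq checkpoint, a minimal hedged sketch with the standard `transformers` summarization pipeline (the input text below is a placeholder, not training data) would look like this:
```python
# Hypothetical usage sketch; the expected input domain is not documented.
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/T5-model-1-d-4")
text = "Replace this with an input document similar to the (undocumented) training data."
print(summarizer(text)[0]["summary_text"])
```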
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0873 | 1.0 | 8043 | 0.0456 | 93.3486 | 82.1873 | 92.8611 | 92.7768 | 14.9953 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
Dallasmorningstar/Hb
|
Dallasmorningstar
| 2022-08-27T14:11:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-29T08:51:13Z |
```bash
git lfs install
git clone https://huggingface.co/Dallasmorningstar/Hb
```
|
huggingtweets/nickjr-paramountplus-sesamestreet
|
huggingtweets
| 2022-08-27T14:08:32Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T14:08:26Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1326222819248791552/u6HtLEIV_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1478805340212838413/YAJM_fei_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1508543786737090570/k9hp_5-2_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sesame Street & Nick Jr. & Paramount+</div>
<div style="text-align: center; font-size: 14px;">@nickjr-paramountplus-sesamestreet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sesame Street & Nick Jr. & Paramount+.
| Data | Sesame Street | Nick Jr. | Paramount+ |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 746 | 51 | 60 |
| Short tweets | 41 | 754 | 40 |
| Tweets kept | 2463 | 2445 | 3150 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lbv4k51/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nickjr-paramountplus-sesamestreet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/339dkoxu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/339dkoxu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nickjr-paramountplus-sesamestreet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nrazavi/xlm-roberta-base-finetuned-panx-en
|
nrazavi
| 2022-08-27T14:01:26Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T13:50:42Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: train
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6833890746934226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4085
- F1: 0.6834
## Model description
More information needed
## Intended uses & limitations
More information needed
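The intended uses are not documented, but since this is a token-classification (NER) head on XLM-RoBERTa, a hedged sketch with the standard `transformers` pipeline is:
```python
# Illustrative only; English PAN-X validation F1 is about 0.68, so expect noisy labels.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nrazavi/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works for Google in Mountain View."))
```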
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1943 | 1.0 | 50 | 0.6081 | 0.5020 |
| 0.5325 | 2.0 | 100 | 0.4455 | 0.6193 |
| 0.3915 | 3.0 | 150 | 0.4085 | 0.6834 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nrazavi/xlm-roberta-base-finetuned-panx-it
|
nrazavi
| 2022-08-27T13:50:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T13:39:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: train
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8094848732624693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- F1: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
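As above, the intended uses are undocumented; a sketch with explicit `Auto*` loading (assuming the repo ships both tokenizer and weights) is:
```python
# Sketch only: maps each sub-token to its predicted PAN-X label.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "nrazavi/xlm-roberta-base-finetuned-panx-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Giuseppe vive a Roma e lavora per la FIAT.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, predicted)])
```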
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7908 | 1.0 | 70 | 0.3093 | 0.7437 |
| 0.2824 | 2.0 | 140 | 0.2580 | 0.8015 |
| 0.1834 | 3.0 | 210 | 0.2619 | 0.8095 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nrazavi/xlm-roberta-base-finetuned-panx-fr
|
nrazavi
| 2022-08-27T13:39:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T13:27:06Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8367792906370819
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- F1: 0.8368
## Model description
More information needed
## Intended uses & limitations
More information needed
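The same caveat applies; a quick hedged example with the token-classification pipeline:
```python
# Illustrative only; French PAN-X validation F1 is about 0.84.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nrazavi/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron a rencontré la direction d'Airbus à Toulouse."))
```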
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.581 | 1.0 | 191 | 0.3798 | 0.7573 |
| 0.2625 | 2.0 | 382 | 0.2806 | 0.8260 |
| 0.1748 | 3.0 | 573 | 0.2772 | 0.8368 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
chum76/chiron0076
|
chum76
| 2022-08-27T12:27:38Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-08-27T12:27:38Z |
---
license: cc-by-nc-sa-4.0
---
|
akkasayaz/q-FrozenLake-v1-4x4-noSlippery
|
akkasayaz
| 2022-08-27T12:22:50Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T12:22:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` and `evaluate_agent` are helper functions defined in the Deep RL Class (Unit 2)
# notebook, and `gym` must be imported as well; they are not part of a published package.
model = load_from_hub(repo_id="akkasayaz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Shamus/mBART_skr-en_longerrun
|
Shamus
| 2022-08-27T11:28:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T07:38:38Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mBART_skr-en_longerrun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBART_skr-en_longerrun
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4577
- Bleu: 30.8071
- Gen Len: 34.548
## Model description
More information needed
## Intended uses & limitations
More information needed
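Usage is not documented. A hedged sketch follows; language-code handling is an assumption, since mBART-50 has no dedicated Saraiki (`skr`) code, so the snippet only forces English output and leaves the source code untouched:
```python
# Sketch only: language-code handling depends on how the checkpoint was actually fine-tuned.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Shamus/mBART_skr-en_longerrun"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("<Saraiki sentence here>", return_tensors="pt")  # placeholder input
gen_kwargs = {}
if hasattr(tokenizer, "lang_code_to_id"):  # mBART-50-style tokenizer: force English output
    gen_kwargs["forced_bos_token_id"] = tokenizer.lang_code_to_id["en_XX"]
outputs = model.generate(**inputs, **gen_kwargs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```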
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.5444 | 0.72 | 500 | 1.3416 | 28.7505 | 34.228 |
| 0.8576 | 1.45 | 1000 | 1.3411 | 30.1776 | 34.328 |
| 0.6422 | 2.18 | 1500 | 1.3882 | 30.2815 | 34.164 |
| 0.532 | 2.9 | 2000 | 1.3716 | 30.8947 | 34.556 |
| 0.4473 | 3.63 | 2500 | 1.4577 | 30.8071 | 34.548 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-finetuned-dapt_tapt-ner-ai
|
silviacamplani
| 2022-08-27T11:12:23Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T11:09:10Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-finetuned-dapt_tapt-ner-ai
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-finetuned-dapt_tapt-ner-ai
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8595
- Validation Loss: 0.8604
- Train Precision: 0.3378
- Train Recall: 0.3833
- Train F1: 0.3591
- Train Accuracy: 0.7860
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
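Intended uses are not documented; as a hedged sketch (assuming the tokenizer was pushed alongside the TensorFlow weights), inference could look like:
```python
# Illustrative sketch for the TensorFlow checkpoint; labels come from the model config.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "silviacamplani/distilbert-finetuned-dapt_tapt-ner-ai"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Transformer models are widely used in natural language processing.",
                   return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.argmax(logits, axis=-1)[0].numpy().tolist()
print([model.config.id2label[i] for i in pred_ids])
```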
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.5333 | 1.7392 | 0.0 | 0.0 | 0.0 | 0.6480 | 0 |
| 1.5890 | 1.4135 | 0.0 | 0.0 | 0.0 | 0.6480 | 1 |
| 1.3635 | 1.2627 | 0.0 | 0.0 | 0.0 | 0.6483 | 2 |
| 1.2366 | 1.1526 | 0.1538 | 0.0920 | 0.1151 | 0.6921 | 3 |
| 1.1296 | 1.0519 | 0.2147 | 0.2147 | 0.2147 | 0.7321 | 4 |
| 1.0374 | 0.9753 | 0.2743 | 0.2981 | 0.2857 | 0.7621 | 5 |
| 0.9639 | 0.9202 | 0.3023 | 0.3373 | 0.3188 | 0.7693 | 6 |
| 0.9097 | 0.8829 | 0.3215 | 0.3714 | 0.3447 | 0.7795 | 7 |
| 0.8756 | 0.8635 | 0.3280 | 0.3850 | 0.3542 | 0.7841 | 8 |
| 0.8595 | 0.8604 | 0.3378 | 0.3833 | 0.3591 | 0.7860 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Shamus/mbart-large-50-many-to-many-mmt-finetuned-acw-to-en
|
Shamus
| 2022-08-27T07:46:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-23T02:45:09Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-50-many-to-many-mmt-finetuned-acw-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-many-to-many-mmt-finetuned-acw-to-en
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5204
- Bleu: 34.8213
- Gen Len: 33.544
## Model description
More information needed
## Intended uses & limitations
More information needed
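Usage is undocumented; since the base model is `facebook/mbart-large-50-many-to-many-mmt`, a hedged sketch with the mBART-50 classes is shown below. The `ar_AR` source code is an assumed stand-in for Hijazi Arabic (`acw`), not something confirmed by the authors, and the tokenizer is loaded from the base checkpoint in case the fine-tuned repo does not ship one:
```python
# Sketch only: verify the source-language code against the actual training setup.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = MBartForConditionalGeneration.from_pretrained(
    "Shamus/mbart-large-50-many-to-many-mmt-finetuned-acw-to-en"
)

tokenizer.src_lang = "ar_AR"  # assumption: Hijazi Arabic handled under the generic Arabic code
inputs = tokenizer("<Hijazi Arabic sentence here>", return_tensors="pt")  # placeholder input
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```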
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.4657 | 1.0 | 816 | 1.1739 | 30.1212 | 32.868 |
| 0.8541 | 2.0 | 1632 | 1.1190 | 33.0098 | 32.808 |
| 0.6176 | 3.0 | 2448 | 1.1681 | 33.3634 | 32.756 |
| 0.3397 | 4.0 | 3264 | 1.3327 | 33.2941 | 33.6 |
| 0.2227 | 5.0 | 4080 | 1.4211 | 33.9298 | 33.128 |
| 0.1597 | 6.0 | 4896 | 1.5157 | 34.7405 | 33.496 |
| 0.1426 | 6.13 | 5000 | 1.5204 | 34.8213 | 33.544 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Shamus/mbart-large-50-many-to-many-mmt-finetuned-skr-en_2.8k
|
Shamus
| 2022-08-27T07:22:46Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T03:12:18Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-50-many-to-many-mmt-finetuned-skr-en_2.8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-many-to-many-mmt-finetuned-skr-en_2.8k
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2315
- Bleu: 28.2149
- Gen Len: 35.188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.3194 | 1.0 | 2759 | 1.2315 | 28.2149 | 35.188 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pinot/wav2vec2-large-xls-r-300m-ja-colab-3
|
pinot
| 2022-08-27T06:14:51Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-26T23:39:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-ja-colab-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ja-colab-3
This model is a fine-tuned version of [pinot/wav2vec2-large-xls-r-300m-ja-colab-2](https://huggingface.co/pinot/wav2vec2-large-xls-r-300m-ja-colab-2) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2696
- Wer: 0.2299
## Model description
More information needed
## Intended uses & limitations
More information needed
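Intended uses are not documented; a hedged sketch with the standard ASR pipeline (the audio path is a placeholder) is:
```python
# Illustrative only: the pipeline decodes and resamples the audio file before CTC decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="pinot/wav2vec2-large-xls-r-300m-ja-colab-3",
)
print(asr("path/to/japanese_speech.wav")["text"])  # placeholder path
```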
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 637 | 1.4666 | 0.2862 |
| No log | 2.0 | 1274 | 1.4405 | 0.2866 |
| No log | 3.0 | 1911 | 1.4162 | 0.2762 |
| No log | 4.0 | 2548 | 1.4128 | 0.2709 |
| 0.2814 | 5.0 | 3185 | 1.3927 | 0.2613 |
| 0.2814 | 6.0 | 3822 | 1.3629 | 0.2536 |
| 0.2814 | 7.0 | 4459 | 1.3349 | 0.2429 |
| 0.2814 | 8.0 | 5096 | 1.3116 | 0.2356 |
| 0.1624 | 9.0 | 5733 | 1.2774 | 0.2307 |
| 0.1624 | 10.0 | 6370 | 1.2696 | 0.2299 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bnsh/ddpm-butterflies-128
|
bnsh
| 2022-08-27T05:56:30Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-27T04:43:24Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch; API per recent 🤗 Diffusers releases (older versions returned a dict instead of `.images`)
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("bnsh/ddpm-butterflies-128")
image = pipeline().images[0]  # one 128x128 generated butterfly
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the `huggan/smithsonian_butterflies_subset` dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/bnsh/ddpm-butterflies-128/tensorboard?#scalars)
|
rajistics/layoutlmv2-finetuned-cord
|
rajistics
| 2022-08-27T04:45:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T03:25:11Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
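Usage is undocumented; a hedged sketch is given below. It assumes the CORD-style token-classification setup, loads the processor from the base checkpoint (the fine-tuned repo may not ship one), and requires `detectron2` and `pytesseract` to be installed:
```python
# Sketch only: the processor runs OCR on the receipt image and builds the layout inputs.
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("rajistics/layoutlmv2-finetuned-cord")

image = Image.open("receipt.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
pred_ids = outputs.logits.argmax(-1)[0].tolist()
print([model.config.id2label[i] for i in pred_ids])
```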
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
JNK789/distilbert-base-uncased-finetuned-emotion
|
JNK789
| 2022-08-27T03:55:45Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T18:53:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9307950942842982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- Accuracy: 0.9305
- F1: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
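The card omits usage; since this is a standard sequence-classification checkpoint fine-tuned on the `emotion` dataset, a minimal sketch is:
```python
# Illustrative only: returns the predicted emotion label and its score.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JNK789/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
```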
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7721 | 1.0 | 250 | 0.2778 | 0.9145 | 0.9131 |
| 0.2103 | 2.0 | 500 | 0.1818 | 0.925 | 0.9249 |
| 0.1446 | 3.0 | 750 | 0.1712 | 0.9305 | 0.9308 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mindofmadness/faces01
|
mindofmadness
| 2022-08-27T02:11:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-27T02:08:30Z |
short narrow face, mid size lips, light freckles on upper cheeks, light grey eyes, brunette hair, nerd glasses
|
theojolliffe/T5-model-1-d-6
|
theojolliffe
| 2022-08-27T00:15:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T22:53:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-model-1-d-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-d-6
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
- Rouge1: 94.972
- Rouge2: 84.9842
- Rougel: 94.7792
- Rougelsum: 94.758
- Gen Len: 15.0918
## Model description
More information needed
## Intended uses & limitations
More information needed
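As with the related `T5-model-1-d-4`, usage is undocumented; a sketch with explicit loading (the generation settings are guesses, not the authors' choices) is:
```python
# Hypothetical sketch; the expected input domain is not documented.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theojolliffe/T5-model-1-d-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Replace this with an input document in the training domain.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```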
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 0.0449 | 1.0 | 16085 | 0.0229 | 94.972 | 84.9842 | 94.7792 | 94.758 | 15.0918 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|