| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 – 2025-09-11 18:29:29) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 555 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2025-09-11 18:25:24) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
abdusah/aradia-ctc-hubert-ft
|
abdusah
| 2022-03-31T20:56:27Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T08:14:31Z |
---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-hubert-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-hubert-ft
This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8536
- Wer: 0.3737
## Model description
More information needed
## Intended uses & limitations
More information needed
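A minimal inference sketch (an assumption, not from the original card: it presumes the hosted repo id works with the standard `automatic-speech-recognition` pipeline on a 16 kHz audio file; `audio.wav` is a placeholder path):
```python
from transformers import pipeline

# Load the fine-tuned HuBERT CTC checkpoint (repo id assumed from this card's metadata)
asr = pipeline("automatic-speech-recognition", model="abdusah/aradia-ctc-hubert-ft")

# Transcribe a local 16 kHz mono audio file (placeholder path)
print(asr("audio.wav")["text"])
```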
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.43 | 100 | 3.6934 | 1.0 |
| No log | 0.87 | 200 | 3.0763 | 1.0 |
| No log | 1.3 | 300 | 2.9737 | 1.0 |
| No log | 1.74 | 400 | 2.5734 | 1.0 |
| 5.0957 | 2.17 | 500 | 1.1900 | 0.9011 |
| 5.0957 | 2.61 | 600 | 0.9726 | 0.7572 |
| 5.0957 | 3.04 | 700 | 0.8960 | 0.6209 |
| 5.0957 | 3.48 | 800 | 0.7851 | 0.5515 |
| 5.0957 | 3.91 | 900 | 0.7271 | 0.5115 |
| 1.0312 | 4.35 | 1000 | 0.7053 | 0.4955 |
| 1.0312 | 4.78 | 1100 | 0.6823 | 0.4737 |
| 1.0312 | 5.22 | 1200 | 0.6768 | 0.4595 |
| 1.0312 | 5.65 | 1300 | 0.6635 | 0.4488 |
| 1.0312 | 6.09 | 1400 | 0.6602 | 0.4390 |
| 0.6815 | 6.52 | 1500 | 0.6464 | 0.4310 |
| 0.6815 | 6.95 | 1600 | 0.6455 | 0.4394 |
| 0.6815 | 7.39 | 1700 | 0.6630 | 0.4312 |
| 0.6815 | 7.82 | 1800 | 0.6521 | 0.4126 |
| 0.6815 | 8.26 | 1900 | 0.6282 | 0.4284 |
| 0.544 | 8.69 | 2000 | 0.6248 | 0.4178 |
| 0.544 | 9.13 | 2100 | 0.6510 | 0.4104 |
| 0.544 | 9.56 | 2200 | 0.6527 | 0.4013 |
| 0.544 | 10.0 | 2300 | 0.6511 | 0.4064 |
| 0.544 | 10.43 | 2400 | 0.6734 | 0.4061 |
| 0.4478 | 10.87 | 2500 | 0.6756 | 0.4145 |
| 0.4478 | 11.3 | 2600 | 0.6727 | 0.3990 |
| 0.4478 | 11.74 | 2700 | 0.6619 | 0.4007 |
| 0.4478 | 12.17 | 2800 | 0.6614 | 0.4019 |
| 0.4478 | 12.61 | 2900 | 0.6695 | 0.4004 |
| 0.3919 | 13.04 | 3000 | 0.6778 | 0.3966 |
| 0.3919 | 13.48 | 3100 | 0.6872 | 0.3971 |
| 0.3919 | 13.91 | 3200 | 0.6882 | 0.3945 |
| 0.3919 | 14.35 | 3300 | 0.7177 | 0.4010 |
| 0.3919 | 14.78 | 3400 | 0.6888 | 0.4043 |
| 0.3767 | 15.22 | 3500 | 0.7124 | 0.4202 |
| 0.3767 | 15.65 | 3600 | 0.7276 | 0.4120 |
| 0.3767 | 16.09 | 3700 | 0.7265 | 0.4034 |
| 0.3767 | 16.52 | 3800 | 0.7392 | 0.4077 |
| 0.3767 | 16.95 | 3900 | 0.7403 | 0.3965 |
| 0.3603 | 17.39 | 4000 | 0.7445 | 0.4016 |
| 0.3603 | 17.82 | 4100 | 0.7579 | 0.4012 |
| 0.3603 | 18.26 | 4200 | 0.7225 | 0.3963 |
| 0.3603 | 18.69 | 4300 | 0.7355 | 0.3951 |
| 0.3603 | 19.13 | 4400 | 0.7482 | 0.3925 |
| 0.3153 | 19.56 | 4500 | 0.7723 | 0.3972 |
| 0.3153 | 20.0 | 4600 | 0.7469 | 0.3898 |
| 0.3153 | 20.43 | 4700 | 0.7800 | 0.3944 |
| 0.3153 | 20.87 | 4800 | 0.7827 | 0.3897 |
| 0.3153 | 21.3 | 4900 | 0.7935 | 0.3914 |
| 0.286 | 21.74 | 5000 | 0.7984 | 0.3750 |
| 0.286 | 22.17 | 5100 | 0.7945 | 0.3830 |
| 0.286 | 22.61 | 5200 | 0.8011 | 0.3775 |
| 0.286 | 23.04 | 5300 | 0.7978 | 0.3824 |
| 0.286 | 23.48 | 5400 | 0.8161 | 0.3833 |
| 0.2615 | 23.91 | 5500 | 0.7823 | 0.3858 |
| 0.2615 | 24.35 | 5600 | 0.8312 | 0.3863 |
| 0.2615 | 24.78 | 5700 | 0.8427 | 0.3819 |
| 0.2615 | 25.22 | 5800 | 0.8432 | 0.3802 |
| 0.2615 | 25.65 | 5900 | 0.8286 | 0.3794 |
| 0.2408 | 26.09 | 6000 | 0.8224 | 0.3824 |
| 0.2408 | 26.52 | 6100 | 0.8228 | 0.3823 |
| 0.2408 | 26.95 | 6200 | 0.8324 | 0.3795 |
| 0.2408 | 27.39 | 6300 | 0.8564 | 0.3744 |
| 0.2408 | 27.82 | 6400 | 0.8629 | 0.3774 |
| 0.2254 | 28.26 | 6500 | 0.8545 | 0.3778 |
| 0.2254 | 28.69 | 6600 | 0.8492 | 0.3767 |
| 0.2254 | 29.13 | 6700 | 0.8511 | 0.3751 |
| 0.2254 | 29.56 | 6800 | 0.8491 | 0.3753 |
| 0.2254 | 30.0 | 6900 | 0.8536 | 0.3737 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
magitz/distilbert-base-uncased-finetuned-emotion
|
magitz
| 2022-03-31T20:48:43Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T20:41:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9267965474109292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.9265
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
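A minimal usage sketch (assuming the hosted repo id and the standard text-classification pipeline):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier (repo id assumed from this card's metadata)
classifier = pipeline("text-classification", model="magitz/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its score
print(classifier("I'm thrilled with how this turned out!"))
```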
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 |
| 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ghees/FatimeFellowship
|
ghees
| 2022-03-31T20:47:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-31T20:45:21Z |
Preprocessing before feeding text to the model (wrapped here in a hypothetical `embed` helper so the snippet is runnable):
```
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('paraphrase-MiniLM-L6-v2', device='cuda')

def embed(text):
    # Return the sentence embedding for a single text
    ...
    embeddings = model.encode([text])
    return embeddings[0]
```
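Calling `embed("some example text")` then returns a single sentence embedding (a 384-dimensional vector for `paraphrase-MiniLM-L6-v2`).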
|
arampacha/gpt-neo-therapist-small
|
arampacha
| 2022-03-31T20:34:26Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"onnx",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-30T08:40:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gpt-neo-therapist-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-therapist-small
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6731
- Rouge1: 39.5028
- Rouge2: 6.43
- Rougel: 24.0091
- Rougelsum: 35.4481
- Gen Len: 204.1329
## Model description
More information needed
## Intended uses & limitations
More information needed
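A minimal generation sketch (assuming the hosted repo id and the standard text-generation pipeline):
```python
from transformers import pipeline

# Load the fine-tuned GPT-Neo 125M checkpoint (repo id assumed from this card's metadata)
generator = pipeline("text-generation", model="arampacha/gpt-neo-therapist-small")

print(generator("I have been feeling anxious lately because", max_new_tokens=60)[0]["generated_text"])
```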
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 24
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 9.9955 | 0.97 | 7 | 6.8195 | 18.6047 | 1.0194 | 14.8565 | 17.9774 | 212.0983 |
| 6.9729 | 1.97 | 14 | 5.6783 | 26.3789 | 3.0779 | 18.5195 | 24.8592 | 203.0925 |
| 5.2614 | 2.97 | 21 | 5.0506 | 34.9428 | 4.921 | 21.9741 | 32.1122 | 206.2775 |
| 5.0599 | 3.97 | 28 | 4.7372 | 38.5235 | 6.2251 | 23.5923 | 34.5633 | 204.2428 |
| 4.5479 | 4.97 | 35 | 4.6731 | 39.5028 | 6.43 | 24.0091 | 35.4481 | 204.1329 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
WENGSYX/Deberta-Chinese-Large
|
WENGSYX
| 2022-03-31T20:08:59Z | 56 | 16 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Deberta-Chinese
This project pretrains Microsoft's open-source DeBERTa model on Chinese text. We release it to give the community more choices of pretrained language models.
The model is pretrained on the WuDaoCorpora corpus, a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research on the "WuDao" large-model project.
Pretraining uses whole-word masking (WWM) and n-gram MLM objectives.
| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Training time | Optimizer |
| --------------------- | ------ | --------- | ------ | ------ | ---- | ------ |
| Deberta-Chinese-Large | 1e-5 | 512 | 2×3090 | 200 GB | 14 days | AdamW |
### Loading and usage
Built on Hugging Face `transformers`:
```
from transformers import AutoModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")
```
#### Note: use BertTokenizer to load the Chinese vocabulary
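Continuing from the loading snippet above, a short usage sketch (an assumption, not from the original card) for encoding a sentence:
```
import torch

# Encode a Chinese sentence and inspect the last hidden states
inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```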
|
Tahsin-Mayeesha/distilbert-finetuned-fakenews
|
Tahsin-Mayeesha
| 2022-03-31T17:11:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T15:58:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-finetuned-fakenews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-fakenews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0049
- Accuracy: 0.9995
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0392 | 1.0 | 500 | 0.0059 | 0.999 | 0.999 |
| 0.002 | 2.0 | 1000 | 0.0047 | 0.9995 | 0.9995 |
| 0.0001 | 3.0 | 1500 | 0.0047 | 0.9995 | 0.9995 |
| 0.0001 | 4.0 | 2000 | 0.0049 | 0.9995 | 0.9995 |
| 0.0 | 5.0 | 2500 | 0.0049 | 0.9995 | 0.9995 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
rahulacj/bertweet-base-finetuned-sentiment-analysis
|
rahulacj
| 2022-03-31T16:21:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T09:42:31Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-sentiment-analysis
This model is a fine-tuned version of [cardiffnlp/bertweet-base-sentiment](https://huggingface.co/cardiffnlp/bertweet-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8458
- Accuracy: 0.6426
- F1: 0.6397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8904 | 1.0 | 630 | 0.8509 | 0.6381 | 0.6340 |
| 0.7655 | 2.0 | 1260 | 0.8345 | 0.6579 | 0.6559 |
| 0.66 | 3.0 | 1890 | 0.9199 | 0.6548 | 0.6514 |
| 0.447 | 4.0 | 2520 | 1.0324 | 0.6429 | 0.6417 |
| 0.3585 | 5.0 | 3150 | 1.1234 | 0.6452 | 0.6424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
eren23/pneumonia-bielefeld-dl-course
|
eren23
| 2022-03-31T15:55:27Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-27T12:17:21Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia-bielefeld-dl-course
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8456632494926453
---
# pneumonia-bielefeld-dl-course
This repository contains a model for pneumonia prediction, prepared as homework for the
Bielefeld University Deep Learning course.
The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning models with Hugging Face and PyTorch Lightning, originally built for another dataset.
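A minimal inference sketch (assuming the hosted repo id and the standard image-classification pipeline; `chest_xray.jpg` is a placeholder path):
```python
from transformers import pipeline

# Load the fine-tuned ViT pneumonia classifier (repo id assumed from this card's metadata)
classifier = pipeline("image-classification", model="eren23/pneumonia-bielefeld-dl-course")

# Returns a list of {label, score} predictions for the image
print(classifier("chest_xray.jpg"))
```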
|
Nonem100/Test-Model
|
Nonem100
| 2022-03-31T15:19:38Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-31T15:19:30Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Test-Model
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9017857313156128
---
# Test-Model
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cotton candy

#### hamburger

#### hot dog

#### nachos

#### popcorn

|
huggingtweets/timdingmanlive
|
huggingtweets
| 2022-03-31T14:30:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-31T14:26:57Z |
---
language: en
thumbnail: http://www.huggingtweets.com/timdingmanlive/1648736999131/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2844974270/7bb6450b90b65f8712d9433b8d5e1971_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tim Dingman</div>
<div style="text-align: center; font-size: 14px;">@timdingmanlive</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tim Dingman.
| Data | Tim Dingman |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 555 |
| Short tweets | 138 |
| Tweets kept | 2547 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7yvdv2z7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timdingmanlive's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/timdingmanlive')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
oferweintraub/bert-base-finance-sentiment-noisy-search
|
oferweintraub
| 2022-03-31T14:13:45Z | 23 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"Finance-sentiment-analysis",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- Finance-sentiment-analysis
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: bert-base-finance-sentiment-noisy-search
results: []
widget:
- text: "Third quarter reported revenues were $10.9 billion, up 5 percent compared to prior year and up 8 percent on a currency-neutral basis"
example_title: "Positive"
- text: "The London-listed website for businesses reported a pretax loss of $26.6 million compared with a loss of $12.9 million the previous year"
example_title: "Negative"
- text: "Microsoft updates Outlook, Teams, and PowerPoint to be hybrid work ready"
example_title: "Neutral"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finance-sentiment-noisy-search
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Kaggle finance news sentiment dataset, with data enhancement using noisy search. The process is explained below:
1. First, "bert-base-uncased" was fine-tuned on Kaggle's finance news sentiment analysis dataset (https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news), achieving an accuracy of about 88%.
2. We then trained a logistic-regression classifier on the same data and inspected the bi-gram coefficients that contributed most to the "Positive" and "Negative" classes.
3. Using the top 25 bi-grams per class (i.e. "Positive" / "Negative"), we queried Bing news search with those bi-grams and retrieved up to 50 news items per bi-gram phrase.
4. We call it "noisy search" because we assume that positive bi-grams (e.g. "profit rose", "growth net") yield positive examples and negative bi-grams (e.g. "loss increase", "share loss") yield negative examples; note that we did not test the validity of this assumption (hence: noisy search).
5. For each article we kept the title + excerpt and labeled it according to these assumed class associations.
6. We then trained the same model on the noisy data and applied it to a held-out test set from the original data split.
7. Training with a couple of thousand noisy "positive" and "negative" examples yielded a test-set accuracy of about 95%.
8. This shows that automatically collecting noisy examples via search can boost accuracy from about 88% to more than 95%.
Accuracy results for Logistic Regression (LR) and BERT (base-cased) are shown in the attached pdf:
https://drive.google.com/file/d/1MI9gRdppactVZ_XvhCwvoaOV1aRfprrd/view?usp=sharing
## Model description
BERT model trained on noisy data from search results. See PDF for more details.
## Intended uses & limitations
Intended for finance news sentiment analysis with 3 classes: "Positive", "Neutral" and "Negative".
To get the best results, feed the classifier the title plus either the first paragraph or a short news summary of up to about 64 tokens, as sketched below.
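A minimal sketch of that recommended usage (assuming the hosted repo id and the standard text-classification pipeline; the label strings are taken from the widget examples above):
```python
from transformers import pipeline

# Load the fine-tuned finance sentiment classifier (repo id assumed from this card's metadata)
classifier = pipeline("text-classification", model="oferweintraub/bert-base-finance-sentiment-noisy-search")

# Title plus a short excerpt, kept to roughly 64 tokens as recommended above
text = ("Third quarter reported revenues were $10.9 billion, "
        "up 5 percent compared to prior year")
print(classifier(text))  # e.g. [{'label': 'Positive', 'score': ...}]
```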
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/youtube
|
huggingtweets
| 2022-03-31T14:06:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-31T14:05:50Z |
---
language: en
thumbnail: http://www.huggingtweets.com/youtube/1648735587597/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1427292844612595720/RC1YSvuT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">YouTube</div>
<div style="text-align: center; font-size: 14px;">@youtube</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from YouTube.
| Data | YouTube |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 23 |
| Short tweets | 104 |
| Tweets kept | 3123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dx34obn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youtube's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/youtube')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Edresson/wav2vec2-large-xlsr-coraa-portuguese
|
Edresson
| 2022-03-31T13:28:43Z | 632 | 15 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"hf-asr-leaderboard",
"PyTorch",
"dataset:CORAA",
"arxiv:2110.15731",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: pt
datasets:
- CORAA
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- hf-asr-leaderboard
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: CORAA
type: CORAA
args: pt
metrics:
- name: Test CORAA WER
type: wer
value: 25.26
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER on Common Voice 7
type: wer
value: 20.08
---
# Wav2vec 2.0 trained with CORAA Portuguese Dataset
This is a demonstration of a Wav2vec 2.0 model fine-tuned for Portuguese on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```
# Results
For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)
# Example test with Common Voice Dataset
```python
import re

import torchaudio
from datasets import load_dataset

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    # Load the clip, resample from 48 kHz to 16 kHz and normalize the transcript
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
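The snippets above assume `chars_to_ignore_regex`, `map_to_pred`, and `wer` are defined elsewhere; a minimal sketch of what they could look like (an assumption, not part of the original card, reusing the `model` loaded in the "Use this model" snippet):
```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

# Assumed helpers: a processor for this checkpoint, the standard WER metric and a punctuation filter
processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
wer = load_metric("wer")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"]'  # assumed set of characters to strip

def map_to_pred(batch):
    # Encode the resampled speech, run the CTC model and decode the greedy prediction
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```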
|
Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test
|
Khalsuu
| 2022-03-31T12:09:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T08:45:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 2nd-wav2vec2-l-xls-r-300m-turkish-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2nd-wav2vec2-l-xls-r-300m-turkish-test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6019
- Wer: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0522 | 3.67 | 400 | 0.7773 | 0.7296 |
| 0.5369 | 7.34 | 800 | 0.6282 | 0.5888 |
| 0.276 | 11.01 | 1200 | 0.5998 | 0.5330 |
| 0.1725 | 14.68 | 1600 | 0.5859 | 0.4908 |
| 0.1177 | 18.35 | 2000 | 0.6019 | 0.4444 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
YiTian/wav2vec2-common_voice-tr-demo
|
YiTian
| 2022-03-31T11:40:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T09:39:08Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9841
- Wer: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 7.14 | 100 | 3.6689 | 1.0 |
| No log | 14.29 | 200 | 3.0280 | 0.9999 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-base_toy_train_data_random_low_pass
|
scasutt
| 2022-03-31T10:42:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T08:21:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_random_low_pass
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_random_low_pass
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3227
- Wer: 0.7288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0795 | 2.1 | 500 | 3.2227 | 0.9982 |
| 1.21 | 4.2 | 1000 | 1.3713 | 0.8879 |
| 0.742 | 6.3 | 1500 | 1.2660 | 0.8296 |
| 0.5877 | 8.4 | 2000 | 1.2921 | 0.7794 |
| 0.4823 | 10.5 | 2500 | 1.2899 | 0.7565 |
| 0.4036 | 12.6 | 3000 | 1.3486 | 0.7494 |
| 0.391 | 14.7 | 3500 | 1.2701 | 0.7466 |
| 0.3426 | 16.81 | 4000 | 1.3570 | 0.7279 |
| 0.3015 | 18.91 | 4500 | 1.3227 | 0.7288 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
unjustify/autotrain-IWant-689220804
|
unjustify
| 2022-03-31T06:46:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:unjustify/autotrain-data-IWant",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-31T06:09:55Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- unjustify/autotrain-data-IWant
co2_eq_emissions: 39.40549299946679
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 689220804
- CO2 Emissions (in grams): 39.40549299946679
## Validation Metrics
- Loss: 2.0426149368286133
- Rouge1: 54.9813
- Rouge2: 44.923
- RougeL: 54.0399
- RougeLsum: 54.2553
- Gen Len: 16.6211
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-IWant-689220804
```
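A Python sketch is also possible (an assumption modeled on other AutoTrain cards; the checkpoint is a T5-style seq2seq model, so the seq2seq Auto classes are used):
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("unjustify/autotrain-IWant-689220804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-IWant-689220804", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```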
|
michiyasunaga/BioLinkBERT-base
|
michiyasunaga
| 2022-03-31T00:51:21Z | 6,225 | 36 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"biolinkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:pubmed",
"arxiv:2203.15827",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-08T07:22:12Z |
---
license: apache-2.0
language: en
datasets:
- pubmed
tags:
- bert
- exbert
- linkbert
- biolinkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
widget:
- text: "Sunitinib is a tyrosine kinase inhibitor"
---
## BioLinkBERT-base
BioLinkBERT-base model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-base')
model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-base')
inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art.
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE |
| ---------------------- | -------- | -------- | ------- | -------- |
| PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 |
| **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** |
| **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** |
| | MMLU-professional medicine |
| ---------------------- | -------- |
| GPT-3 (175B params) | 38.7 |
| UnifiedQA (11B params) | 43.2 |
| **BioLinkBERT-large (340M params)** | **50.7** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
hoangbinhmta99/wav2vec-NCKH-2022
|
hoangbinhmta99
| 2022-03-31T00:28:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"audio",
"speech",
"Transformer",
"automatic-speech-recognition",
"vi",
"dataset:vivos",
"dataset:common_voice",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-30T04:39:46Z |
---
language: vi
datasets:
- vivos
- common_voice
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- speech
- Transformer
license: cc-by-nc-4.0
model-index:
- name: Wav2vec2 NCKH Vietnamese 2022
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: No
---
Convert the model from a fairseq .pt checkpoint to the transformers format.
Link: https://huggingface.co/tommy19970714/wav2vec2-base-960h
Bash:
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
  --pytorch_dump_folder_path ./outputs \
  --checkpoint_path ./finetuned/wav2vec_small.pt \
  --dict_path ./dict/dict.ltr.txt --not_finetuned
```
# install and upload model
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
git lfs install
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo
ls
cd wav2vec-demo/
git status
git add .
git commit -m "First model version"
git config --global user.email "[your email]"
git config --global user.name "[your name]"
git commit -m "First model version"
git push
```
|
mrm8488/biomedtra-small-es
|
mrm8488
| 2022-03-30T21:07:50Z | 3 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"pretraining",
"Spanish",
"Electra",
"Bio",
"Medical",
"es",
"dataset:cowese",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: es
tags:
- Spanish
- Electra
- Bio
- Medical
datasets:
- cowese
---
## 🦠 BIOMEDtra 🏥
**BIOMEDtra** (small) is an Electra like model (discriminator in this case) trained on [Spanish Biomedical Crawled Corpus](https://zenodo.org/record/5510033#.Yhdk1ZHMLJx).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
The model was trained using the Electra base code for 3 days on 1 GPU (Tesla V100 16GB).
## Dataset details
The largest Spanish biomedical and health corpus to date was gathered by crawling more than 3,000 Spanish health-domain URLs, which were downloaded and preprocessed. The collected data were processed into the **CoWeSe** (Corpus Web Salud Español) resource, a large-scale, high-quality corpus intended for biomedical and health NLP in Spanish.
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.9561|
|Precision| 0.808|
|Recall | 0.531 |
|AUC | 0.949|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
```py
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("mrm8488/biomedtra-small-es")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/biomedtra-small-es")
sentence = "Los españoles tienden a sufrir déficit de vitamina c"
fake_sentence = "Los españoles tienden a déficit sufrir de vitamina c"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % prediction, end="") for prediction in predictions.tolist()]
```
## Acknowledgments
TBA
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022biomedtra,
title={Spanish BioMedical Electra (small)},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/biomedtra-small-es}},
year={2022}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/legalectra-small-spanish
|
mrm8488
| 2022-03-30T21:06:31Z | 41 | 3 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"Spanish",
"Electra",
"Legal",
"es",
"dataset:Spanish-legal-corpora",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: es
tags:
- Spanish
- Electra
- Legal
datasets:
- Spanish-legal-corpora
---
## LEGALECTRA ⚖️
**LEGALECTRA** (small) is an Electra like model (discriminator in this case) trained on [A collection of corpora of Spanish legal domain](https://zenodo.org/record/5495529#.YZItp3vMLJw).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
The model was trained using the Electra base code for 3 days on 1 Tesla V100 16GB.
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.955|
|Precision| 0.790|
|AUC | 0.971|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
TBA
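Until an official snippet is added, a minimal sketch (an assumption: it presumes the same `ElectraForPreTraining` discriminator interface as the sibling BIOMEDtra and ELECTRICIDAD models, and an arbitrary example sentence):
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("mrm8488/legalectra-small-spanish")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/legalectra-small-spanish")

# Arbitrary example: "manzanas" (apples) replaces the plausible token "años" (years)
fake_sentence = "El acusado fue condenado a cinco manzanas de prisión"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

logits = discriminator(fake_inputs).logits
predictions = (logits.sign() + 1) / 2  # 1 = token flagged as replaced

# Skip [CLS]/[SEP] so predictions line up with the visible tokens
print(list(zip(fake_tokens, predictions[0, 1:-1].int().tolist())))
```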
## Acknowledgments
TBA
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022legalectra,
title={Spanish Legal Electra (small)},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/legalectra-small-spanish}},
year={2022}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
vlsb/autotrain-security-text-classification-albert-688320769
|
vlsb
| 2022-03-30T20:59:32Z | 15 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain",
"unk",
"dataset:vlsb/autotrain-data-security-text-classification-albert",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T20:55:59Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-text-classification-albert
co2_eq_emissions: 3.670416179055797
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688320769
- CO2 Emissions (in grams): 3.670416179055797
## Validation Metrics
- Loss: 0.3046899139881134
- Accuracy: 0.8826530612244898
- Precision: 0.9181818181818182
- Recall: 0.8782608695652174
- AUC: 0.9423510466988727
- F1: 0.8977777777777778
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-text-classification-albert-688320769
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
mrm8488/electricidad-small-discriminator
|
mrm8488
| 2022-03-30T20:44:50Z | 9 | 5 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"Spanish",
"Electra",
"es",
"dataset:large_spanish_corpus",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: es
thumbnail: https://i.imgur.com/uxAvBfh.png
tags:
- Spanish
- Electra
datasets:
- large_spanish_corpus
---
## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh)
**ELECTRICIDAD** is a small Electra like model (discriminator in this case) trained on a [Large Spanish Corpus](https://github.com/josecannete/spanish-corpora) (aka BETO's corpus).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M|
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.94|
|Precision| 0.76|
|AUC | 0.92|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-small-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-small-discriminator")
sentence = "el zorro rojo es muy rápido"
fake_sentence = "el zorro rojo es muy ser"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
# Output:
'''
el zorro rojo es muy ser 0 0 0 0 0 1[None, None, None, None, None, None]
'''
```
As you can see there is a **1** in the place where the model detected the fake token (**ser**). So, it works! 🎉
[Electricidad-small fine-tuned models](https://huggingface.co/models?search=electricidad-small)
## Acknowledgments
I thank [🤗/transformers team](https://github.com/huggingface/transformers) for answering my doubts and Google for helping me with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2020electricidad-small-discriminator,
title={Spanish Electra (small) by Manuel Romero},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/electricidad-small-discriminator}},
year={2020}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
waboucay/camembert-base-finetuned-xnli_fr
|
waboucay
| 2022-03-30T17:47:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-11T08:54:07Z |
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 89.2 | 87.6 |
| test | 88.9 | 87.4 |
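The card does not include a usage snippet; a minimal inference sketch (assumptions: premise and hypothesis are passed as a sentence pair, and the label names come from the model config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "waboucay/camembert-base-finetuned-xnli_fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Le film a reçu d'excellentes critiques."
hypothesis = "Le film a été bien accueilli."

# Encode the pair and pick the highest-scoring NLI label
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```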
|
hoangbinhmta99/wav2vec-demo
|
hoangbinhmta99
| 2022-03-30T17:18:48Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
Convert the model from a fairseq .pt checkpoint to the transformers format.
Link: https://huggingface.co/tommy19970714/wav2vec2-base-960h
Bash:
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
  --pytorch_dump_folder_path ./outputs \
  --checkpoint_path ./finetuned/wav2vec_small.pt \
  --dict_path ./dict/dict.ltr.txt --not_finetuned
```
# install and upload model
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
git lfs install
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo
ls
cd wav2vec-demo/
git status
git add .
git commit -m "First model version"
git config --global user.email "[your email]"
git config --global user.name "[your name]"
git commit -m "First model version"
git push
```
|
abdusah/aradia-ctc-v1
|
abdusah
| 2022-03-30T13:48:41Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-23T10:58:05Z |
---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-v1
This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7171
- Wer: 0.3336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.22 | 100 | 5.1889 | 1.0 |
| No log | 0.43 | 200 | 3.1129 | 1.0 |
| No log | 0.65 | 300 | 3.0503 | 1.0 |
| No log | 0.87 | 400 | 3.0279 | 1.0 |
| 6.2756 | 1.09 | 500 | 2.9965 | 1.0 |
| 6.2756 | 1.3 | 600 | 2.3618 | 0.9993 |
| 6.2756 | 1.52 | 700 | 1.2715 | 0.8758 |
| 6.2756 | 1.74 | 800 | 0.9971 | 0.7156 |
| 6.2756 | 1.96 | 900 | 0.8927 | 0.6382 |
| 1.712 | 2.17 | 1000 | 0.8252 | 0.5926 |
| 1.712 | 2.39 | 1100 | 0.7794 | 0.5434 |
| 1.712 | 2.61 | 1200 | 0.7557 | 0.5092 |
| 1.712 | 2.83 | 1300 | 0.7347 | 0.5203 |
| 1.712 | 3.04 | 1400 | 0.7189 | 0.4929 |
| 0.9305 | 3.26 | 1500 | 0.6820 | 0.4595 |
| 0.9305 | 3.48 | 1600 | 0.6792 | 0.4504 |
| 0.9305 | 3.69 | 1700 | 0.6596 | 0.4442 |
| 0.9305 | 3.91 | 1800 | 0.6756 | 0.4432 |
| 0.9305 | 4.13 | 1900 | 0.6663 | 0.4392 |
| 0.737 | 4.35 | 2000 | 0.6479 | 0.4372 |
| 0.737 | 4.56 | 2100 | 0.6353 | 0.4203 |
| 0.737 | 4.78 | 2200 | 0.6251 | 0.4088 |
| 0.737 | 5.0 | 2300 | 0.6209 | 0.4177 |
| 0.737 | 5.22 | 2400 | 0.6639 | 0.4094 |
| 0.6247 | 5.43 | 2500 | 0.6408 | 0.3970 |
| 0.6247 | 5.65 | 2600 | 0.6373 | 0.3932 |
| 0.6247 | 5.87 | 2700 | 0.6411 | 0.3928 |
| 0.6247 | 6.09 | 2800 | 0.6378 | 0.3897 |
| 0.6247 | 6.3 | 2900 | 0.6396 | 0.3929 |
| 0.5443 | 6.52 | 3000 | 0.6544 | 0.3864 |
| 0.5443 | 6.74 | 3100 | 0.6218 | 0.3786 |
| 0.5443 | 6.96 | 3200 | 0.6200 | 0.3784 |
| 0.5443 | 7.17 | 3300 | 0.6157 | 0.3791 |
| 0.5443 | 7.39 | 3400 | 0.6317 | 0.3798 |
| 0.4845 | 7.61 | 3500 | 0.6540 | 0.3771 |
| 0.4845 | 7.83 | 3600 | 0.6436 | 0.3670 |
| 0.4845 | 8.04 | 3700 | 0.6335 | 0.3695 |
| 0.4845 | 8.26 | 3800 | 0.6579 | 0.3610 |
| 0.4845 | 8.48 | 3900 | 0.6170 | 0.3613 |
| 0.4279 | 8.69 | 4000 | 0.6523 | 0.3617 |
| 0.4279 | 8.91 | 4100 | 0.6349 | 0.3577 |
| 0.4279 | 9.13 | 4200 | 0.6344 | 0.3673 |
| 0.4279 | 9.35 | 4300 | 0.6215 | 0.3641 |
| 0.4279 | 9.56 | 4400 | 0.6513 | 0.3608 |
| 0.3825 | 9.78 | 4500 | 0.6386 | 0.3605 |
| 0.3825 | 10.0 | 4600 | 0.6724 | 0.3549 |
| 0.3825 | 10.22 | 4700 | 0.6776 | 0.3602 |
| 0.3825 | 10.43 | 4800 | 0.6739 | 0.3544 |
| 0.3825 | 10.65 | 4900 | 0.6688 | 0.3557 |
| 0.3477 | 10.87 | 5000 | 0.6674 | 0.3564 |
| 0.3477 | 11.09 | 5100 | 0.6786 | 0.3476 |
| 0.3477 | 11.3 | 5200 | 0.6818 | 0.3478 |
| 0.3477 | 11.52 | 5300 | 0.6874 | 0.3470 |
| 0.3477 | 11.74 | 5400 | 0.6993 | 0.3424 |
| 0.3101 | 11.96 | 5500 | 0.6950 | 0.3404 |
| 0.3101 | 12.17 | 5600 | 0.6872 | 0.3406 |
| 0.3101 | 12.39 | 5700 | 0.6846 | 0.3424 |
| 0.3101 | 12.61 | 5800 | 0.7051 | 0.3405 |
| 0.3101 | 12.83 | 5900 | 0.7051 | 0.3378 |
| 0.2859 | 13.04 | 6000 | 0.6955 | 0.3403 |
| 0.2859 | 13.26 | 6100 | 0.7115 | 0.3390 |
| 0.2859 | 13.48 | 6200 | 0.7074 | 0.3384 |
| 0.2859 | 13.69 | 6300 | 0.7002 | 0.3376 |
| 0.2859 | 13.91 | 6400 | 0.7171 | 0.3360 |
| 0.2714 | 14.13 | 6500 | 0.7193 | 0.3341 |
| 0.2714 | 14.35 | 6600 | 0.7132 | 0.3347 |
| 0.2714 | 14.56 | 6700 | 0.7184 | 0.3353 |
| 0.2714 | 14.78 | 6800 | 0.7171 | 0.3331 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
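## Example usage
A minimal inference sketch, assuming the repository ships the processor/tokenizer alongside the CTC weights and that the input audio is 16 kHz mono; the file name is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="abdusah/aradia-ctc-v1")
# "sample_arabic.wav" is a placeholder for any local 16 kHz mono recording
print(asr("sample_arabic.wav")["text"])
```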
|
javilonso/classificationEsp3_Attraction
|
javilonso
| 2022-03-30T12:09:19Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T11:07:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationEsp3_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationEsp3_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-base-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0055
- Validation Loss: 0.0515
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0964 | 0.0662 | 0 |
| 0.0265 | 0.0500 | 1 |
| 0.0055 | 0.0515 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
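## Example usage
A hedged TensorFlow sketch: it assumes the repository contains TF weights with a sequence-classification head on top of the GPT-2 base named above, and that the meaning of each label index can be read from `model.config.id2label`.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "javilonso/classificationEsp3_Attraction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Un lugar maravilloso para visitar en verano.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred, model.config.id2label.get(pred, "label name not set"))
```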
|
yinde/dummy-model
|
yinde
| 2022-03-30T11:59:15Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T11:37:44Z |
Fake news classifier
This is a text-classification model for detecting fake news articles.
It fine-tunes the distilbert-base-uncased-finetuned-sst-2-english pretrained model on the
Fake and Real News dataset from Kaggle (https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset).
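A usage sketch, assuming the checkpoint keeps a two-class head and that the fake/real mapping is recorded in `model.config.id2label`:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yinde/dummy-model")
headline = "Scientists discover water on the surface of Mars."
print(classifier(headline))  # [{'label': ..., 'score': ...}]; check id2label for the fake/real mapping
```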
|
joe5campbell/Horovod_Tweet_Sentiment_1K_4eps
|
joe5campbell
| 2022-03-30T11:38:32Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T12:35:50Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Horovod_Tweet_Sentiment_1K_4eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Horovod_Tweet_Sentiment_1K_4eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6803332
- Train Accuracy: 0.57187504
- Validation Loss: 0.6883397
- Validation Accuracy: 0.54375
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.70931095 | 0.5078125 | 0.81717503 | 0.528125 | 0 |
| 0.77384466 | 0.5296875 | 0.68696874 | 0.51875 | 1 |
| 0.68944424 | 0.53125 | 0.6837756 | 0.53125 | 2 |
| 0.6803332 | 0.57187504 | 0.6883397 | 0.54375 | 3 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
|
mimicheng/codeparrot-ds-sample-2ep-29mar
|
mimicheng
| 2022-03-30T09:50:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-30T03:41:46Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-2ep-29mar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-2ep-29mar
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2585 | 1.86 | 5000 | 1.6283 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
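## Example usage
A sampling sketch: since the model was fine-tuned from GPT-2 on code, a Python-style prompt is the natural input; the generation settings below are illustrative, not taken from the card.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mimicheng/codeparrot-ds-sample-2ep-29mar")
prompt = "def load_csv(path):\n    "
print(generator(prompt, max_length=64, num_return_sequences=1)[0]["generated_text"])
```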
|
jeniakim/hedgehog
|
jeniakim
| 2022-03-30T09:27:38Z | 51 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language: en
license: mit
inference: false
---
🦔 HEDGEhog 🦔: BERT-based multi-class uncertainty cues recognition
====================================================================
# Description
A fine-tuned multi-class classification model that detects four different types of uncertainty cues (a.k.a. hedges) at the token level.
# Uncertainty types
label | type | description | example
---| ---| ---| ---
E | Epistemic | The proposition is possible, but its truth-value cannot be decided at the moment. | She **may** be already asleep.
I | Investigation | The proposition is in the process of having its truth-value determined. | She **examined** the role of NF-kappaB in protein activation.
D | Doxatic | The proposition expresses beliefs and hypotheses, which may be known as true or false by others. | She **believes** that the Earth is flat.
N | Condition | The proposition is true or false based on the truth-value of another proposition. | **If** she gets the job, she will move to Utrecht.
C | *certain* | *n/a* | *n/a*
# Intended uses and limitations
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
# How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
from simpletransformers.ner import NERModel
model = NERModel(
'bert',
'jeniakim/hedgehog',
use_cuda=False,
labels=["C", "D", "E", "I", "N"],
)
example = "As much as I definitely enjoy solitude, I wouldn't mind perhaps spending little time with you (Björk)"
predictions, raw_outputs = model.predict([example])
```
The predictions look like this:
```
[[{'As': 'C'},
{'much': 'C'},
{'as': 'C'},
{'I': 'C'},
{'definitely': 'C'},
{'enjoy': 'C'},
{'solitude,': 'C'},
{'I': 'C'},
{"wouldn't": 'C'},
{'mind': 'C'},
{'perhaps': 'E'},
{'spending': 'C'},
{'little': 'C'},
{'time': 'C'},
{'with': 'C'},
{'you': 'C'},
{'(Björk)': 'C'}]]
```
In other words, the token 'perhaps' is recognized as an **epistemic uncertainty cue** and all the other tokens are not uncertainty cues.
# Training Data
HEDGEhog is trained and evaluated on the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) (Szarvas et al. 2012<sup>1</sup>). The original sentence-level XML version of this dataset is available [here](https://rgai.inf.u-szeged.hu/node/160).
The token-level version that was used for the training can be downloaded from [here](https://1drv.ms/u/s!AvPkt_QxBozXk7BiazucDqZkVxLo6g?e=IisuM6) in the form of pickled pandas DataFrames. You can download either the split sets (```train.pkl``` 137MB, ```test.pkl``` 17MB, ```dev.pkl``` 17MB) or the full dataset (```szeged_fixed.pkl``` 172MB). Each row in the DataFrame contains a token, its features (these are not relevant for HEDGEhog; they were used to train the baseline CRF model, see [here](https://github.com/vanboefer/uncertainty_crf)), its sentence ID, and its label.
# Training Procedure
The following training parameters were used (a fine-tuning sketch with these settings follows the list):
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 16
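A rough Simple Transformers fine-tuning sketch with these settings. The starting checkpoint (`bert-base-uncased`), the pickle path, and the column handling are assumptions for illustration, not taken from the card.
```python
import pandas as pd
from simpletransformers.ner import NERModel, NERArgs

# "train.pkl" is the pickled DataFrame described above; Simple Transformers expects
# columns named sentence_id / words / labels, so rename them if needed.
train_df = pd.read_pickle("train.pkl")

model_args = NERArgs()
model_args.learning_rate = 4e-5      # values listed above
model_args.num_train_epochs = 1
model_args.train_batch_size = 16

model = NERModel(
    "bert",
    "bert-base-uncased",             # assumed starting checkpoint
    labels=["C", "D", "E", "I", "N"],
    args=model_args,
)
model.train_model(train_df)
```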
# Evaluation Results
class | precision | recall | F1-score | support
---|---|---|---|---
Epistemic | 0.90 | 0.85 | 0.88 | 624
Doxatic | 0.88 | 0.92 | 0.90 | 142
Investigation | 0.83 | 0.86 | 0.84 | 111
Condition | 0.85 | 0.87 | 0.86 | 86
Certain | 1.00 | 1.00 | 1.00 | 104,751
**macro average** | **0.89** | **0.90** | **0.89** | 105,714
# References
<sup>1</sup> Szarvas, G., Vincze, V., Farkas, R., Móra, G., & Gurevych, I. (2012). Cross-genre and cross-domain detection of semantic uncertainty. *Computational Linguistics, 38*(2), 335-367.
|
markussagen/xlm-roberta-longformer-base-4096
|
markussagen
| 2022-03-30T09:24:39Z | 9,277 | 36 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"longformer",
"multilingual",
"dataset:wikitext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- longformer
language: multilingual
license: apache-2.0
datasets:
- wikitext
---
## XLM-R Longformer Model / XLM-Long
XLM-R Longformer (or XLM-Long for short) is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.
The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without having to pre-train them on long-context datasets in each respective language. The trained model is the result of a master's thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r).
Since both XLM-R and Longformer are large models, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU, and several gradient accumulation steps.
## How to Use
The model can be fine-tuned on a downstream task in the usual way, for instance question answering (QA).
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
MAX_SEQUENCE_LENGTH = 4096
MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
padding="max_length",
truncation=True,
)
model = AutoModelForQuestionAnswering.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
)
```
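For inference, the question and the (long) document are fed as a sentence pair and truncated to the extended 4096-token window. A minimal sketch, reusing the names from the snippet above; it assumes the QA head has already been fine-tuned (freshly loaded, it is randomly initialized), and the `context` string is a placeholder for a real document.
```python
question = "Where was the treaty signed?"
context = "..."  # placeholder for a long document, up to 4096 tokens

inputs = tokenizer(
    question,
    context,
    max_length=MAX_SEQUENCE_LENGTH,
    truncation="only_second",  # truncate the document, never the question
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```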
## Training Procedure
The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information.
```sh
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
unzip wikitext-103-raw-v1.zip
export DATA_DIR=./wikitext-103-raw
scripts/run_long_lm.py \
--model_name_or_path xlm-roberta-base \
--model_name xlm-roberta-to-longformer \
--output_dir ./output \
--logging_dir ./logs \
--val_file_path $DATA_DIR/wiki.valid.raw \
--train_file_path $DATA_DIR/wiki.train.raw \
--seed 42 \
--max_pos 4096 \
--adam_epsilon 1e-8 \
--warmup_steps 500 \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--max_steps 6000 \
--evaluate_during_training \
--logging_steps 50 \
--eval_steps 50 \
--save_steps 6000 \
--max_grad_norm 1.0 \
--per_device_eval_batch_size 2 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 64 \
--overwrite_output_dir \
--fp16 \
--do_train \
--do_eval
```
|
Aureliano/electra-if
|
Aureliano
| 2022-03-30T09:07:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"feature-extraction",
"en",
"arxiv:1406.2661",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-11T15:40:21Z |
---
language: en
license: apache-2.0
---
## ELECTRA for IF
**ELECTRA** is a method for self-supervised language representation learning. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf).
For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
This repository contains a small ELECTRA discriminator fine-tuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in the sentence. The original dataset was collected from the lists of actions in the walkthroughs of the games included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information, visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora.
## How to use the discriminator in `transformers`
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used;
# you should fine-tune the model on your own corpus of commands, which should be bigger than this toy example.
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/electra-if",
label2id=label2id,
id2label=id2label)
tokenizer = AutoTokenizer.from_pretrained("Aureliano/electra-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 25
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=5e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
|
javilonso/classificationPolEsp1
|
javilonso
| 2022-03-30T09:02:50Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T07:49:20Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationPolEsp1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationPolEsp1
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3728
- Validation Loss: 0.6217
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6282 | 0.6017 | 0 |
| 0.5129 | 0.6177 | 1 |
| 0.3728 | 0.6217 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
neibla/distilbert-base-uncased-finetuned-emotion
|
neibla
| 2022-03-30T08:56:26Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T08:22:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254917237562972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.855 | 1.0 | 250 | 0.3211 | 0.905 | 0.9017 |
| 0.2561 | 2.0 | 500 | 0.2187 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
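## Example usage
A short inference sketch; the label strings are whatever the emotion fine-tune stored in `model.config.id2label`.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="neibla/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how great this day turned out!"))
# [{'label': ..., 'score': ...}]; the label set comes from the emotion dataset fine-tune
```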
|
nlp-waseda/gpt2-small-japanese
|
nlp-waseda
| 2022-03-30T04:28:17Z | 26 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-30T03:34:11Z |
---
language:
- ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "早稲田 大学 で 自然 言語 処理 を"
---
# nlp-waseda/gpt2-small-japanese
This is a Japanese GPT-2 model pretrained on Japanese Wikipedia and CC-100.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it on a downstream task.
Note that the texts should be segmented into words using Juman++ in advance.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese')
>>> set_seed(42)
>>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5)
[{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 、 早稲田 大学 理工 学部 に 入学 し ます 。 卒業 後 、 早稲田 大学 工学 研究 科 、'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 アメリカ の 大学 で 学士 号 を 取得 、 修士 の 取得 で 博士 号 を 取得 。 2008 年'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 勉強 して い ます 。 学部 は 日本 語 学科 を 専攻 して い ます 。 英語 が 話せる と いう'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 して いた 。 2011 年 に 第 26 回 日本 化学 会 学生 委員 会 奨励 賞 ( 第 2 年次 審査'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 中心 と する 言語 学 研究 を 行って いる 。 東京 都 ・ 豊島 区 の お 見合い 相手 。'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ReformerTokenizer, GPT2Model
tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese')
model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese')
text = "早稲田 大学 で 自然 言語 処理 を"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
The GPT-2 model was pretrained on Japanese Wikipedia, dumped on 2022-03-20, and the Japanese portion of CC-100.
## Training procedure
### Preprocessing
The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining.
The model was trained on 8 NVIDIA A100 GPUs.
|
samayash/finetuning-financial-news-sentiment
|
samayash
| 2022-03-30T03:36:40Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T03:27:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-financial-news-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-financial-news-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8751
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
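## Example usage
A minimal sketch, assuming the checkpoint keeps a standard sequence-classification head; the example sentence is illustrative only.
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="samayash/finetuning-financial-news-sentiment")
print(sentiment("Quarterly revenue beat expectations and guidance was raised."))
```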
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio
|
scasutt
| 2022-03-30T03:35:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-29T11:30:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6445
- Wer: 0.4938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3761 | 1.05 | 250 | 3.4022 | 0.9954 |
| 3.0858 | 2.1 | 500 | 3.4684 | 0.9954 |
| 2.6302 | 3.15 | 750 | 1.7989 | 0.9865 |
| 1.1292 | 4.2 | 1000 | 0.8558 | 0.7355 |
| 0.8371 | 5.25 | 1250 | 0.7319 | 0.6621 |
| 0.5992 | 6.3 | 1500 | 0.6848 | 0.6147 |
| 0.5189 | 7.35 | 1750 | 0.6522 | 0.5742 |
| 0.454 | 8.4 | 2000 | 0.6601 | 0.5531 |
| 0.3896 | 9.45 | 2250 | 0.6138 | 0.5439 |
| 0.3678 | 10.5 | 2500 | 0.6436 | 0.5320 |
| 0.3232 | 11.55 | 2750 | 0.5920 | 0.5174 |
| 0.2926 | 12.6 | 3000 | 0.6615 | 0.5107 |
| 0.3041 | 13.65 | 3250 | 0.6311 | 0.5015 |
| 0.2882 | 14.7 | 3500 | 0.6182 | 0.5004 |
| 0.2868 | 15.75 | 3750 | 0.6266 | 0.4943 |
| 0.2508 | 16.81 | 4000 | 0.6587 | 0.4965 |
| 0.2563 | 17.86 | 4250 | 0.6634 | 0.4939 |
| 0.2213 | 18.91 | 4500 | 0.6441 | 0.4925 |
| 0.2255 | 19.96 | 4750 | 0.6445 | 0.4938 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
javilonso/classificationEsp2_Attraction
|
javilonso
| 2022-03-30T03:04:09Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T23:17:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationEsp2_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationEsp2_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9927
- Validation Loss: 0.9926
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35916, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8200 | 0.9930 | 0 |
| 0.9942 | 0.9947 | 1 |
| 0.9927 | 0.9926 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aaraki/vit-base-patch16-224-in21k-finetuned-cifar10
|
aaraki
| 2022-03-30T01:41:47Z | 8,239 | 10 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-30T00:18:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- Accuracy: 0.9788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
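## Example usage
An inference sketch, assuming the feature extractor is bundled with the checkpoint; the image path is a placeholder for any local RGB image.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="aaraki/vit-base-patch16-224-in21k-finetuned-cifar10")
# "cat.png" is a placeholder path
print(classifier("cat.png")[:3])  # top predictions with CIFAR-10 labels
```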
|
BigSalmon/InformalToFormalLincoln33
|
BigSalmon
| 2022-03-30T01:24:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-30T01:19:07Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln33")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln33")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
|
cammiemw/bert-marco-hdct
|
cammiemw
| 2022-03-30T01:21:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T01:09:55Z |
---
license: cc-by-nc-4.0
---
|
DrishtiSharma/poem-gen-spanish-t5-small-v7
|
DrishtiSharma
| 2022-03-30T00:34:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-29T19:14:40Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v7
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000333
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.1716 | 0.73 | 30000 | 3.1114 |
| 2.9666 | 1.46 | 60000 | 3.0271 |
| 2.8292 | 2.19 | 90000 | 2.9531 |
| 2.7264 | 2.93 | 120000 | 2.9126 |
| 2.6057 | 3.66 | 150000 | 2.9175 |
| 2.4876 | 4.39 | 180000 | 2.9077 |
| 2.3791 | 5.12 | 210000 | 2.9240 |
| 2.3515 | 5.85 | 240000 | 2.9169 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
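## Example usage
A generation sketch. The structured prompt format of the upstream `poem-gen-spanish-t5-small` model is not restated in this card, so a plain Spanish prompt and illustrative sampling settings are used here as assumptions.
```python
from transformers import pipeline

poet = pipeline("text2text-generation", model="DrishtiSharma/poem-gen-spanish-t5-small-v7")
out = poet("poema: la luna sobre el mar", max_length=64, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```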
|
DrishtiSharma/poem-gen-spanish-t5-small-v6
|
DrishtiSharma
| 2022-03-29T23:45:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-29T18:58:46Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v6
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8551 | 0.73 | 30000 | 2.9296 |
| 2.6961 | 1.46 | 60000 | 2.9005 |
| 2.5756 | 2.19 | 90000 | 2.8786 |
| 2.5095 | 2.93 | 120000 | 2.8621 |
| 2.4061 | 3.66 | 150000 | 2.8830 |
| 2.3161 | 4.39 | 180000 | 2.8865 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/poem-gen-spanish-t5-small-v5
|
DrishtiSharma
| 2022-03-29T23:25:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-29T18:54:38Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v5
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000125
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.9366 | 0.73 | 30000 | 2.9656 |
| 2.7518 | 1.46 | 60000 | 2.9120 |
| 2.6018 | 2.19 | 90000 | 2.8870 |
| 2.5262 | 2.93 | 120000 | 2.8646 |
| 2.3886 | 3.66 | 150000 | 2.8816 |
| 2.2758 | 4.39 | 180000 | 2.8900 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/PointsToSentence
|
BigSalmon
| 2022-03-29T23:11:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-29T22:58:46Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
Keywords to sentences or sentence.
|
krinal214/augmented
|
krinal214
| 2022-03-29T16:58:16Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-29T15:02:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# augmented
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0609 | 1.0 | 9787 | 0.5104 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
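## Example usage
A question-answering sketch, assuming the fine-tune is an extractive, SQuAD-style head on top of multilingual BERT (as the base model and pipeline tag suggest).
```python
from transformers import pipeline

qa = pipeline("question-answering", model="krinal214/augmented")
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris, France."))
```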
|
GleamEyeBeast/ascend
|
GleamEyeBeast
| 2022-03-29T16:49:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-29T01:37:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: ascend
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend
This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3718
- Wer: 0.6412
- Cer: 0.2428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.5769 | 1.0 | 688 | 1.1864 | 0.7716 | 0.3159 |
| 0.5215 | 2.0 | 1376 | 1.1613 | 0.7504 | 0.2965 |
| 0.4188 | 3.0 | 2064 | 1.1644 | 0.7389 | 0.2950 |
| 0.3695 | 4.0 | 2752 | 1.1937 | 0.7184 | 0.2815 |
| 0.3404 | 5.0 | 3440 | 1.1947 | 0.7083 | 0.2719 |
| 0.2885 | 6.0 | 4128 | 1.2314 | 0.7108 | 0.2685 |
| 0.2727 | 7.0 | 4816 | 1.2243 | 0.6850 | 0.2616 |
| 0.2417 | 8.0 | 5504 | 1.2506 | 0.6767 | 0.2608 |
| 0.2207 | 9.0 | 6192 | 1.2804 | 0.6922 | 0.2595 |
| 0.2195 | 10.0 | 6880 | 1.2582 | 0.6818 | 0.2575 |
| 0.1896 | 11.0 | 7568 | 1.3101 | 0.6814 | 0.2545 |
| 0.1961 | 12.0 | 8256 | 1.2793 | 0.6706 | 0.2526 |
| 0.1752 | 13.0 | 8944 | 1.2643 | 0.6584 | 0.2509 |
| 0.1638 | 14.0 | 9632 | 1.3152 | 0.6588 | 0.2482 |
| 0.1522 | 15.0 | 10320 | 1.3098 | 0.6433 | 0.2439 |
| 0.1351 | 16.0 | 11008 | 1.3253 | 0.6537 | 0.2447 |
| 0.1266 | 17.0 | 11696 | 1.3394 | 0.6365 | 0.2418 |
| 0.1289 | 18.0 | 12384 | 1.3718 | 0.6412 | 0.2443 |
| 0.1204 | 19.0 | 13072 | 1.3708 | 0.6433 | 0.2433 |
| 0.1189 | 20.0 | 13760 | 1.3718 | 0.6412 | 0.2428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gabitoo1234/autotrain-mut_all_text-680820343
|
gabitoo1234
| 2022-03-29T16:09:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"es",
"dataset:gabitoo1234/autotrain-data-mut_all_text",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T14:22:14Z |
---
tags: autotrain
language: es
widget:
- text: "I love AutoTrain 🤗"
datasets:
- gabitoo1234/autotrain-data-mut_all_text
co2_eq_emissions: 115.48848403681228
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 680820343
- CO2 Emissions (in grams): 115.48848403681228
## Validation Metrics
- Loss: 0.3041240870952606
- Accuracy: 0.9462770369425126
- Macro F1: 0.7836898686625933
- Micro F1: 0.9462770369425126
- Weighted F1: 0.9449148298990091
- Macro Precision: 0.8344505891491089
- Micro Precision: 0.9462770369425126
- Weighted Precision: 0.9451247372908952
- Macro Recall: 0.7568785255994025
- Micro Recall: 0.9462770369425126
- Weighted Recall: 0.9462770369425126
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gabitoo1234/autotrain-mut_all_text-680820343
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
tbosse/bert-base-german-cased-finetuned-subj_v1
|
tbosse
| 2022-03-29T15:59:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-29T14:22:30Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v1
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Precision: 0.1875
- Recall: 0.0077
- F1: 0.0147
- Accuracy: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 |
| No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 |
| No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Daryaflp/roberta-retrained_ru_covid_papers
|
Daryaflp
| 2022-03-29T13:30:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-29T07:12:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained_ru_covid_papers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained_ru_covid_papers
This model is a fine-tuned version of [Daryaflp/roberta-retrained_ru_covid](https://huggingface.co/Daryaflp/roberta-retrained_ru_covid) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ArtemChistyakov-2/f
|
ArtemChistyakov-2
| 2022-03-29T12:21:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-29T12:21:18Z |
---
license: apache-2.0
---
|
gayanin/bart-med-term-conditional-masking-0
|
gayanin
| 2022-03-29T12:03:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T22:12:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5041
- Rouge2 Precision: 0.7497
- Rouge2 Recall: 0.5246
- Rouge2 Fmeasure: 0.5986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6381 | 1.0 | 13915 | 0.5595 | 0.734 | 0.5152 | 0.5873 |
| 0.5429 | 2.0 | 27830 | 0.5243 | 0.7441 | 0.5225 | 0.5956 |
| 0.5002 | 3.0 | 41745 | 0.5078 | 0.7482 | 0.5238 | 0.5976 |
| 0.4607 | 4.0 | 55660 | 0.5041 | 0.7497 | 0.5246 | 0.5986 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Rishav-hub/xlm-roberta-base-finetuned-panx-de
|
Rishav-hub
| 2022-03-29T11:05:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-29T10:26:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8591260810195721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1512 | 0.8302 |
| 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 |
| 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
KeithHorgan/TweetClimateAnalysis
|
KeithHorgan
| 2022-03-29T10:01:24Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:KeithHorgan98/autotrain-data-TweetClimateAnalysis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T10:16:42Z |
---
tags: autotrain
language: unk
widget:
- text: "Climate Change is a hoax"
- text: "It is freezing, where is global warming"
datasets:
- KeithHorgan98/autotrain-data-TweetClimateAnalysis
co2_eq_emissions: 133.19491276284793
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 678720226
- CO2 Emissions (in grams): 133.19491276284793
## Validation Metrics
- Loss: 0.4864234924316406
- Accuracy: 0.865424430641822
- Macro F1: 0.7665472174344069
- Micro F1: 0.8654244306418221
- Weighted F1: 0.8586375445115083
- Macro Precision: 0.8281449061702826
- Micro Precision: 0.865424430641822
- Weighted Precision: 0.8619727477790186
- Macro Recall: 0.736576343905098
- Micro Recall: 0.865424430641822
- Weighted Recall: 0.865424430641822
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KeithHorgan98/autotrain-TweetClimateAnalysis-678720226
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Davlan/m2m100_418M-eng-yor-mt
|
Davlan
| 2022-03-29T09:21:53Z | 820 | 1 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-eng-yor-mt
## Model description
**m2m100_418M-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
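A minimal usage sketch with the standard M2M100 classes from `transformers` (the example sentence and generation settings below are only illustrative):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Davlan/m2m100_418M-eng-yor-mt")
tokenizer = M2M100Tokenizer.from_pretrained("Davlan/m2m100_418M-eng-yor-mt")

tokenizer.src_lang = "en"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")

# Force the decoder to start with the Yorùbá language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("yo"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```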
#### Limitations and bias
This model is limited by its training dataset. It may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **13.39 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82.
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/m2m100_418M-yor-eng-mt
|
Davlan
| 2022-03-29T09:21:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-yor-eng-mt
## Model description
**m2m100_418M-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
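A minimal usage sketch with the standard M2M100 classes from `transformers` (the example sentence below is only illustrative):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Davlan/m2m100_418M-yor-eng-mt")
tokenizer = M2M100Tokenizer.from_pretrained("Davlan/m2m100_418M-yor-eng-mt")

tokenizer.src_lang = "yo"
inputs = tokenizer("Ẹ káàárọ̀", return_tensors="pt")  # "Good morning"; replace with your own Yorùbá input

# Force the decoder to start with the English language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```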
#### Limitations and bias
This model is limited by its training dataset. It may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **16.76 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 15.57.
### BibTeX entry and citation info
By David Adelani
```
```
|
PereLluis13/wav2vec2-xls-r-1b-ca-lm
|
PereLluis13
| 2022-03-29T08:41:46Z | 3,126 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"collectivat/tv3_parla",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"projecte-aina/parlament_parla",
"robust-speech-event",
"ca",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:collectivat/tv3_parla",
"dataset:projecte-aina/parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-1b-ca-lm
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 6.0722669958130644
- name: Test CER
type: cer
value: 1.9180697705166526
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 5.139820371024042
- name: Test CER
type: cer
value: 2.0163620128164722
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 11.207991684952073
- name: Test CER
type: cer
value: 7.32119307305963
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Catalan Dev Data
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 22.870153690468661
- name: Test CER
type: cer
value: 13.59039190897598
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 15.41
---
# wav2vec2-xls-r-1b-ca-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
## Model description
Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model.
## Intended uses & limitations
As with any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.
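A minimal transcription sketch with the `transformers` ASR pipeline (the audio filename below is a placeholder for a 16 kHz mono recording; `pyctcdecode` and `kenlm` need to be installed for the packaged language model to be used during decoding):
```python
from transformers import pipeline

# Loads the acoustic model plus the packaged KenLM language model (when pyctcdecode/kenlm are available).
asr = pipeline("automatic-speech-recognition", model="PereLluis13/wav2vec2-xls-r-1b-ca-lm")

# "sample_ca.wav" is a placeholder for your own 16 kHz mono audio file.
print(asr("sample_ca.wav")["text"])
```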
## Training and evaluation data
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
### Training results
Check the Tensorboard tab for the training profile and evaluation results over the course of training. The model was evaluated on the test splits of each of the datasets used during training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
|
STARBORN/MMC
|
STARBORN
| 2022-03-29T07:14:35Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-03-29T07:12:26Z |
---
license: mit
---
Metamodel Card (MMC) builds on the MC and DC schemas by adding system-level abstraction to the data. MMC instantiations follow
|
gayanin/t5-small-med-term-conditional-masking-0
|
gayanin
| 2022-03-29T03:19:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T22:04:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-conditional-masking-0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6688
- Rouge2 Precision: 0.694
- Rouge2 Recall: 0.4781
- Rouge2 Fmeasure: 0.5479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9525 | 1.0 | 13915 | 0.8148 | 0.6657 | 0.4581 | 0.5252 |
| 0.8541 | 2.0 | 27830 | 0.7562 | 0.6779 | 0.4694 | 0.5371 |
| 0.8183 | 3.0 | 41745 | 0.7268 | 0.6827 | 0.4722 | 0.5405 |
| 0.8033 | 4.0 | 55660 | 0.7074 | 0.6861 | 0.4729 | 0.5419 |
| 0.7727 | 5.0 | 69575 | 0.6934 | 0.6872 | 0.4726 | 0.5419 |
| 0.7704 | 6.0 | 83490 | 0.6832 | 0.6901 | 0.4742 | 0.544 |
| 0.7485 | 7.0 | 97405 | 0.6771 | 0.6926 | 0.4772 | 0.5469 |
| 0.7528 | 8.0 | 111320 | 0.6722 | 0.6934 | 0.4782 | 0.5478 |
| 0.7535 | 9.0 | 125235 | 0.6696 | 0.6944 | 0.4782 | 0.5481 |
| 0.7444 | 10.0 | 139150 | 0.6688 | 0.694 | 0.4781 | 0.5479 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
i-was-neo-first/hubert-large-ami-shard-experiment-colab
|
i-was-neo-first
| 2022-03-29T00:39:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-20T02:10:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hubert-large-ami-shard-experiment-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-ami-shard-experiment-colab
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: nan
- eval_wer: 1.0
- eval_runtime: 6.0682
- eval_samples_per_second: 16.479
- eval_steps_per_second: 2.142
- epoch: 1.02
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sanchit-gandhi/wav2vec2-2-bart-large-cnn
|
sanchit-gandhi
| 2022-03-29T00:24:41Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-22T16:26:40Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3524
- Wer: 0.1042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.7605 | 4.5 | 500 | 2.6299 | 1.4451 |
| 0.1177 | 9.01 | 1000 | 0.3524 | 0.1042 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
frtna/ted_mt-Spanish-to-Italian
|
frtna
| 2022-03-28T22:04:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:new_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- new_dataset
model-index:
- name: ted_mt-Spanish-to-Italian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ted_mt-Spanish-to-Italian
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-it](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) on the new_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| No log | 1.0 | 46 | 1.4873 | 29.6133 | 26.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-cnndm1
|
Chikashi
| 2022-03-28T22:00:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T14:55:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.4246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6853
- Rouge1: 24.4246
- Rouge2: 11.6944
- Rougel: 20.1717
- Rougelsum: 23.0424
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.912 | 0.14 | 5000 | 1.7167 | 24.4232 | 11.7049 | 20.1758 | 23.0345 | 18.9997 |
| 1.8784 | 0.28 | 10000 | 1.7018 | 24.4009 | 11.6918 | 20.1561 | 23.0073 | 18.9997 |
| 1.8628 | 0.42 | 15000 | 1.6934 | 24.385 | 11.683 | 20.1285 | 22.9823 | 18.9997 |
| 1.8594 | 0.56 | 20000 | 1.6902 | 24.4407 | 11.6835 | 20.1734 | 23.0369 | 18.9996 |
| 1.8537 | 0.7 | 25000 | 1.6864 | 24.3635 | 11.658 | 20.1318 | 22.9782 | 18.9993 |
| 1.8505 | 0.84 | 30000 | 1.6856 | 24.4267 | 11.6991 | 20.1629 | 23.0361 | 18.9994 |
| 1.8505 | 0.98 | 35000 | 1.6853 | 24.4246 | 11.6944 | 20.1717 | 23.0424 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/xls-r-es-test-lm-finetuned-sentiment-mesd
|
DrishtiSharma
| 2022-03-28T19:03:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-28T14:54:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xls-r-es-test-lm-finetuned-sentiment-mesd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-es-test-lm-finetuned-sentiment-mesd
This model is a fine-tuned version of [glob-asr/xls-r-es-test-lm](https://huggingface.co/glob-asr/xls-r-es-test-lm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7851
- Accuracy: 0.2385
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7876 | 0.1923 |
| 1.9709 | 1.86 | 6 | 1.7869 | 0.2 |
| 1.9709 | 2.86 | 9 | 1.7859 | 0.2308 |
| 2.146 | 3.86 | 12 | 1.7851 | 0.2385 |
| 1.9622 | 4.86 | 15 | 1.7842 | 0.1923 |
| 1.9622 | 5.86 | 18 | 1.7834 | 0.1769 |
| 2.137 | 6.86 | 21 | 1.7823 | 0.1923 |
| 2.137 | 7.86 | 24 | 1.7812 | 0.1923 |
| 2.1297 | 8.86 | 27 | 1.7800 | 0.1846 |
| 1.9502 | 9.86 | 30 | 1.7787 | 0.1846 |
| 1.9502 | 10.86 | 33 | 1.7772 | 0.1846 |
| 2.1234 | 11.86 | 36 | 1.7760 | 0.1846 |
| 2.1234 | 12.86 | 39 | 1.7748 | 0.1846 |
| 2.1186 | 13.86 | 42 | 1.7736 | 0.1846 |
| 1.9401 | 14.86 | 45 | 1.7725 | 0.1846 |
| 1.9401 | 15.86 | 48 | 1.7715 | 0.1923 |
| 2.112 | 16.86 | 51 | 1.7706 | 0.1923 |
| 2.112 | 17.86 | 54 | 1.7701 | 0.1923 |
| 2.1094 | 18.86 | 57 | 1.7697 | 0.2 |
| 1.934 | 19.86 | 60 | 1.7696 | 0.2 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
|
scasutt
| 2022-03-28T18:53:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-28T12:30:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6983
- Wer: 0.5026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3619 | 1.05 | 250 | 3.4334 | 1.0 |
| 3.0818 | 2.1 | 500 | 3.4914 | 1.0 |
| 2.3245 | 3.15 | 750 | 1.6483 | 0.9486 |
| 1.0233 | 4.2 | 1000 | 0.8817 | 0.7400 |
| 0.7522 | 5.25 | 1250 | 0.7374 | 0.6529 |
| 0.5343 | 6.3 | 1500 | 0.6972 | 0.6068 |
| 0.4452 | 7.35 | 1750 | 0.6757 | 0.5740 |
| 0.4275 | 8.4 | 2000 | 0.6789 | 0.5551 |
| 0.3688 | 9.45 | 2250 | 0.6468 | 0.5394 |
| 0.3363 | 10.5 | 2500 | 0.6798 | 0.5358 |
| 0.3036 | 11.55 | 2750 | 0.6439 | 0.5265 |
| 0.3173 | 12.6 | 3000 | 0.6898 | 0.5196 |
| 0.2985 | 13.65 | 3250 | 0.6791 | 0.5169 |
| 0.288 | 14.7 | 3500 | 0.6442 | 0.5090 |
| 0.2673 | 15.75 | 3750 | 0.6984 | 0.5119 |
| 0.2575 | 16.81 | 4000 | 0.7146 | 0.5084 |
| 0.239 | 17.86 | 4250 | 0.6847 | 0.5040 |
| 0.2266 | 18.91 | 4500 | 0.6900 | 0.5028 |
| 0.22 | 19.96 | 4750 | 0.6983 | 0.5026 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kingabzpro/CELEB-GANs
|
kingabzpro
| 2022-03-28T18:08:29Z | 0 | 2 | null |
[
"huggan",
"gan",
"dcgans",
"dataset:huggan/CelebA-faces",
"license:apache-2.0",
"region:us"
] | null | 2022-03-28T16:05:34Z |
---
tags:
- huggan
- gan
- dcgans
task: image-generation
license: apache-2.0
datasets:
- huggan/CelebA-faces
---
# Fake Faces with DCGANs
## Model description
A DCGAN (deep convolutional generative adversarial network) trained on the [huggan/CelebA-faces](https://huggingface.co/datasets/huggan/CelebA-faces) dataset to generate fake celebrity-style face images.
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
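A generic, hypothetical sketch of sampling from a trained DCGAN generator in PyTorch; the `Generator` architecture and the `generator.pth` filename are placeholders and may not match the checkpoint layout of this repository:
```python
import torch
from torch import nn

# Hypothetical DCGAN generator skeleton; the real architecture and checkpoint
# layout in this repository may differ.
class Generator(nn.Module):
    def __init__(self, latent_dim=100, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

generator = Generator()
generator.load_state_dict(torch.load("generator.pth", map_location="cpu"))  # hypothetical filename
generator.eval()

with torch.no_grad():
    noise = torch.randn(16, 100, 1, 1)   # a batch of latent vectors
    fake_faces = generator(noise)        # tensor in [-1, 1], shape (16, 3, 32, 32)
```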
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
- Generator_loss: 22.7
- Discriminator_loss: 7.9
## Generated Images
You can embed local or remote images using ``
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
aapot/wav2vec2-large-xlsr-53-finnish
|
aapot
| 2022-03-28T17:56:36Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fi
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Aapo Tanskanen
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 32.378771
---
# NOTE: this is an old model and should not be used anymore!! There are much better, newer models available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10 Finnish](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.378771 %
## Training
The Common Voice `train`, `validation` and `other` datasets were used for training, together with the `CSS10 Finnish` and `Finnish parliament session 2` datasets.
The script used for training can be found in this [Google Colab](https://colab.research.google.com/drive/1vnEGC9BnNRmVyIHj-0UsVulh_cUYSGWA?usp=sharing).
|
aapot/wav2vec2-xlsr-1b-finnish-v2
|
aapot
| 2022-03-28T17:49:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 9.73
- name: Test CER
type: cer
value: 1.65
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version with a KenLM language model used in the decoding phase that produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model in [this blog post](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
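As a quick start, a minimal sketch with the `transformers` ASR pipeline (the audio filename below is a placeholder for a 16 kHz mono recording; the notebook above remains the more detailed reference):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aapot/wav2vec2-xlsr-1b-finnish-v2")

# "sample_fi.wav" is a placeholder for your own 16 kHz mono audio file.
print(asr("sample_fi.wav")["text"])
```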
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of a similar length. You can still try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in these datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
aapot/wav2vec2-xlsr-1b-finnish-lm
|
aapot
| 2022-03-28T17:31:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 5.65
- name: Test CER
type: cer
value: 1.2
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase together with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm) model so this model has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
**Note**: there is a better V2 version of this model, which has been fine-tuned longer with 16 hours of additional data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model in [this blog post](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
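As a quick start, a minimal sketch that decodes with the packaged KenLM language model via `Wav2Vec2ProcessorWithLM` (this assumes the repository follows the standard processor-with-LM layout; `pyctcdecode` and `kenlm` must be installed, and the audio filename below is a placeholder):
```python
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("aapot/wav2vec2-xlsr-1b-finnish-lm")
model = AutoModelForCTC.from_pretrained("aapot/wav2vec2-xlsr-1b-finnish-lm")

# "sample_fi.wav" is a placeholder for your own mono audio file; resample it to 16 kHz.
speech, sample_rate = torchaudio.load("sample_fi.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode runs beam-search decoding with the KenLM language model.
print(processor.batch_decode(logits.numpy()).text)
```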
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of a similar length. You can still try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in these datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase was trained on the text transcriptions of the audio training data. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects. It may be beneficial to train your own KenLM language model for your domain and use that in the decoding.
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
aapot/wav2vec2-xlsr-300m-finnish-lm
|
aapot
| 2022-03-28T17:22:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-300m-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 8.16
- name: Test CER
type: cer
value: 1.97
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm) model so this model has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (300 million parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
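As a quick start, the model can also be loaded with the standard 🤗 Transformers ASR pipeline. The snippet below is a minimal sketch (the audio path is a placeholder, and `pyctcdecode` plus `kenlm` need to be installed for LM-boosted decoding); the notebook above remains the authoritative example:
```python
# Minimal sketch: transcribe a Finnish audio file with the LM-boosted ASR pipeline.
# "audio.wav" is a placeholder for a 16 kHz mono speech file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aapot/wav2vec2-xlsr-300m-finnish-lm",
    # chunk_length_s=30,  # enable chunking for long audio files (see Limitations below)
)

print(asr("audio.wav")["text"])
```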
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. However, you can try it with much longer audio too and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
A vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to a very different language domain, for example everyday spoken language with dialects (especially because Wikipedia consists mostly of formal Finnish). It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding, as sketched below.
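A rough sketch of that workflow, assuming you have already built your own ARPA file (`my_domain_5gram.arpa` is a placeholder) and installed `pyctcdecode` and `kenlm`; this follows the general n-gram decoding recipe rather than an official script from this repository:
```python
# Hedged sketch: attach your own KenLM to this acoustic model for decoding.
# "my_domain_5gram.arpa" is a placeholder path to your domain-specific language model.
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-xlsr-300m-finnish-lm")

# Order the tokenizer vocabulary by token id so decoder labels line up with the CTC logits.
vocab = processor.tokenizer.get_vocab()
labels = [token.lower() for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

decoder = build_ctcdecoder(labels=labels, kenlm_model_path="my_domain_5gram.arpa")

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-finnish-custom-lm")
```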
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
ntoldalagi/C0_LID_DEV
|
ntoldalagi
| 2022-03-28T15:46:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-21T21:34:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: C0_LID_DEV
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# C0_LID_DEV
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.0 | 25 | inf | 0.8426 |
| 1.5354 | 0.17 | 2000 | inf | 0.8198 |
| 1.5688 | 0.33 | 4000 | inf | 0.8271 |
| 1.5294 | 0.5 | 6000 | inf | 0.8339 |
| 1.1947 | 0.67 | 8000 | inf | 0.8260 |
| 1.1534 | 0.83 | 10000 | inf | 0.8267 |
| 1.1484 | 1.0 | 12000 | inf | 0.8267 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mikeadimech/punctuation-test-4
|
mikeadimech
| 2022-03-28T15:09:06Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T14:31:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: punctuation-test-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 39.1294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# punctuation-test-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
- Bleu: 39.1294
- Gen Len: 18.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3331 | 1.0 | 625 | 0.3411 | 39.1294 | 18.4812 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dhlee347/distilbert-imdb
|
dhlee347
| 2022-03-28T14:07:15Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-28T14:01:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1796
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2808 | 1.0 | 782 | 0.1796 | 0.9302 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-cnndm
|
Chikashi
| 2022-03-28T14:04:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T09:07:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6854
- Rouge1: 24.417
- Rouge2: 11.6924
- Rougel: 20.1756
- Rougelsum: 23.0414
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.8522 | 1.0 | 35890 | 1.6854 | 24.417 | 11.6924 | 20.1756 | 23.0414 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dennisowusuk/wav2vec2-large-xls-r-300m-turkish-colab
|
dennisowusuk
| 2022-03-28T13:28:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-28T05:29:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3863
- Wer: 0.3095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8284 | 3.67 | 400 | 0.6782 | 0.6739 |
| 0.4174 | 7.34 | 800 | 0.4524 | 0.4811 |
| 0.2015 | 11.01 | 1200 | 0.4736 | 0.4311 |
| 0.1371 | 14.68 | 1600 | 0.4254 | 0.3929 |
| 0.0997 | 18.35 | 2000 | 0.4254 | 0.3636 |
| 0.082 | 22.02 | 2400 | 0.3807 | 0.3474 |
| 0.0665 | 25.69 | 2800 | 0.3987 | 0.3236 |
| 0.0523 | 29.36 | 3200 | 0.3863 | 0.3095 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/hirox246
|
huggingtweets
| 2022-03-28T13:12:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/hirox246/1648473171015/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura</div>
<div style="text-align: center; font-size: 14px;">@hirox246</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ひろゆき, Hiroyuki Nishimura.
| Data | ひろゆき, Hiroyuki Nishimura |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 288 |
| Short tweets | 2002 |
| Tweets kept | 956 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fs862rv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hirox246's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ktc28kc0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ktc28kc0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hirox246')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augmented
|
scasutt
| 2022-03-28T12:29:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-27T17:08:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augmented
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5016
- Wer: 0.4656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.418 | 1.05 | 250 | 3.4171 | 1.0 |
| 3.0886 | 2.1 | 500 | 3.4681 | 1.0 |
| 2.9422 | 3.15 | 750 | 2.6151 | 1.0 |
| 1.3195 | 4.2 | 1000 | 0.8789 | 0.7739 |
| 0.9154 | 5.25 | 1250 | 0.6364 | 0.6518 |
| 0.6519 | 6.3 | 1500 | 0.5682 | 0.5949 |
| 0.5622 | 7.35 | 1750 | 0.5273 | 0.5625 |
| 0.4965 | 8.4 | 2000 | 0.4891 | 0.5283 |
| 0.4283 | 9.45 | 2250 | 0.5018 | 0.5260 |
| 0.4019 | 10.5 | 2500 | 0.5016 | 0.5006 |
| 0.3585 | 11.55 | 2750 | 0.5047 | 0.5003 |
| 0.3275 | 12.6 | 3000 | 0.5148 | 0.4866 |
| 0.3427 | 13.65 | 3250 | 0.5035 | 0.4786 |
| 0.3229 | 14.7 | 3500 | 0.4855 | 0.4768 |
| 0.3332 | 15.75 | 3750 | 0.5040 | 0.4769 |
| 0.2861 | 16.81 | 4000 | 0.5138 | 0.4669 |
| 0.3029 | 17.86 | 4250 | 0.5133 | 0.4670 |
| 0.2633 | 18.91 | 4500 | 0.5063 | 0.4637 |
| 0.2621 | 19.96 | 4750 | 0.5016 | 0.4656 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Champion/SA-models
|
Champion
| 2022-03-28T11:56:45Z | 0 | 0 | null |
[
"pchampio",
"audio",
"region:us"
] | null | 2022-03-10T18:32:17Z |
---
tags:
- pchampio
- audio
inference: false
---
|
VincentC12/rh_classification_kara
|
VincentC12
| 2022-03-28T11:53:41Z | 9 | 0 |
pytorch
|
[
"pytorch",
"distilbert",
"sentiment-analysis",
"en",
"region:us"
] | null | 2022-03-23T16:19:02Z |
---
language:
- en
library_name: pytorch
metrics:
- satisfaction
- culture organisationnelle
- leadership
- conditions de travail
tags:
- sentiment-analysis
widget:
- text: "My work is recognized by my superiors and I would even say that I feel like I have more recognition since we are on telework."
example_title: "Exemple leadership"
- text: "For Working conditions and wages in particular."
example_title: "Exemple conditions de travail"
- text: "A climate of overperformance is in place in the company."
example_title: "Exemple culture organisationnelle"
- text: "With regard to telework, I look forward to setting up the hybrid week, so 2 3 days at home and at the office."
example_title: "Exemple satisfaction"
---
This model was developed for KARA.
This model is:
- A tool for thematic classification of HR comments
- Trained to be used in ENGLISH (comments must be translated first)
- Specialized for comments between 10 and 512 characters
This model is not:
- Suitable for detecting hate speech or a suicide note
Labels:
- Label_0 = Satisfaction
- Label_1 = Organizational culture (culture organisationnelle)
- Label_2 = Leadership
- Label_3 = Working conditions (conditions de travail)
version 0.0.1
Performance on the HRM dataset: 84.3% precision
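A minimal usage sketch is given below; the card does not document a loading recipe, so loading with the standard `transformers` sequence-classification classes and the label order listed above are assumptions:
```python
# Hedged sketch: classify an (English) HR comment with this checkpoint.
# Assumes the repository is compatible with AutoModelForSequenceClassification
# and that Label_0..Label_3 follow the mapping listed above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "VincentC12/rh_classification_kara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

labels = ["Satisfaction", "Organizational culture", "Leadership", "Working conditions"]

text = "My work is recognized by my superiors."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])
```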
|
robvanderg/Sem-RemmmBERT
|
robvanderg
| 2022-03-28T11:29:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rembert",
"feature-extraction",
"STILT",
"retraining",
"multi-task learning",
"multilingual",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-28T11:20:13Z |
---
language:
- multilingual
tags:
- STILT
- retraining
- multi-task learning
datasets:
- SemEval 2022
---
## Sem-RemmmBERT
This is the SemEval MaChAmp Multitask Multilingual BERT model. This model is retrained from remBERT (https://huggingface.co/google/rembert).
The retraining is done based on all SemEval 2022 tasks that are text based, and have annotation on the word, sentence or paragraph level. The retraining is done with MaChAmp (https://machamp-nlp.github.io/), a toolkit focusing on multi-task learning for NLP. More information can be found in the paper (which should be released when the SemEval proceedings are online).
|
robvanderg/Sem-mmmBERT
|
robvanderg
| 2022-03-28T11:28:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"STILT",
"retraining",
"multi-task learning",
"multilingual",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-28T11:15:17Z |
---
language:
- multilingual
tags:
- STILT
- retraining
- multi-task learning
datasets:
- SemEval 2022
---
## Sem-mmmBERT
This is the SemEval MaChAmp Multitask Multilingual BERT model. This model is retrained from mBERT (https://huggingface.co/bert-base-multilingual-cased).
The retraining is done based on all SemEval 2022 tasks that are text based, and have annotation on the word, sentence or paragraph level. The retraining is done with MaChAmp (https://machamp-nlp.github.io/), a toolkit focusing on multi-task learning for NLP. More information can be found in the paper (which should be released when the SemEval proceedings are online).
|
sanchit-gandhi/wav2vec2-2-bart-large-cnn-no-adapter
|
sanchit-gandhi
| 2022-03-28T11:26:30Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-26T17:08:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9938
- Wer: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9301 | 2.24 | 500 | 4.6291 | 0.9601 |
| 4.4562 | 4.48 | 1000 | 4.3604 | 0.9608 |
| 3.8356 | 6.73 | 1500 | 4.0728 | 0.9530 |
| 3.2716 | 8.97 | 2000 | 3.9938 | 0.9745 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
21iridescent/distilroberta-base-finetuned-squad2-lwt
|
21iridescent
| 2022-03-28T11:18:44Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-28T08:54:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilroberta-base-finetuned-squad2-lwt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-squad2-lwt
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1702 | 1.0 | 4120 | 1.1220 |
| 0.9787 | 2.0 | 8240 | 1.0500 |
| 0.8153 | 3.0 | 12360 | 1.1356 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
Detailed SQuAD v2 evaluation metrics:
- HasAns: exact match 71.39001349527665, F1 77.71740687727831 (5928 examples)
- NoAns: exact match 68.59545836837678, F1 68.59545836837678 (5945 examples)
- Best: exact match 69.9991577528847 (threshold 0.0), F1 73.1583245993857 (threshold 0.0)
- Overall: exact match 69.99073528173166, F1 73.1499021282327 (11873 examples)
|
mrm8488/t5-base-iterater
|
mrm8488
| 2022-03-28T11:00:41Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"IteraTeR",
"en",
"dataset:wanyu/IteraTeR_full_sent",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-27T18:48:43Z |
---
license: apache-2.0
language:
- en
datasets:
- wanyu/IteraTeR_full_sent
tags:
- generated_from_trainer
- IteraTeR
widget:
- text: "<clarity> Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay for the packet has encountered."
model-index:
- name: t5-base-iterater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5 (base) fine-tuned on IteraTeR
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an [IteraTeR](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3286 | 0.09 | 2000 | 0.3010 |
| 0.3194 | 0.18 | 4000 | 0.2872 |
| 0.3208 | 0.27 | 6000 | 0.2792 |
| 0.3091 | 0.36 | 8000 | 0.2731 |
| 0.3164 | 0.45 | 10000 | 0.2678 |
| 0.2941 | 0.54 | 12000 | 0.2682 |
| 0.2981 | 0.63 | 14000 | 0.2696 |
| 0.2975 | 0.72 | 16000 | 0.2643 |
| 0.3109 | 0.81 | 18000 | 0.2624 |
| 0.2965 | 0.9 | 20000 | 0.2648 |
| 0.3053 | 0.99 | 22000 | 0.2627 |
| 0.2779 | 1.08 | 24000 | 0.2632 |
| 0.2692 | 1.17 | 26000 | 0.2608 |
| 0.2755 | 1.26 | 28000 | 0.2600 |
| 0.2771 | 1.35 | 30000 | 0.2584 |
| 0.2774 | 1.44 | 32000 | 0.2609 |
| 0.2976 | 1.53 | 34000 | 0.2593 |
| 0.2646 | 1.62 | 36000 | 0.2616 |
| 0.2705 | 1.71 | 38000 | 0.2574 |
| 0.2714 | 1.8 | 40000 | 0.2577 |
| 0.2857 | 1.9 | 42000 | 0.2576 |
| 0.2832 | 1.99 | 44000 | 0.2580 |
### How to use
```py
from transformers import T5ForConditionalGeneration, T5TokenizerFast
MODEL_CKPT = 'mrm8488/t5-base-iterater'
tokenizer = T5TokenizerFast.from_pretrained(MODEL_CKPT)
model = T5ForConditionalGeneration.from_pretrained(MODEL_CKPT)
def predict(intent, text):
    input_text = f"<{intent}> {text}"
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'],
                            max_length=128, num_beams=8)
    return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay for the packet has encountered."
intent = "clarity"
predict(intent, text)
# Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay the packet has encountered.
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
SAGAR4REAL/wav2vec2hindia
|
SAGAR4REAL
| 2022-03-28T08:32:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-28T07:17:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2hindia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2hindia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/nsawaikar
|
huggingtweets
| 2022-03-28T07:54:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-28T07:52:56Z |
---
language: en
thumbnail: http://www.huggingtweets.com/nsawaikar/1648454046318/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1508184022052184064/yqLU6MxW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nathan.eth</div>
<div style="text-align: center; font-size: 14px;">@nsawaikar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nathan.eth.
| Data | Nathan.eth |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 336 |
| Short tweets | 621 |
| Tweets kept | 2293 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pn1domem/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nsawaikar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/g9hqx5dx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/g9hqx5dx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nsawaikar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jkhan447/sentiment-model-sample-offline-goemotion
|
jkhan447
| 2022-03-28T06:50:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-28T06:33:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-offline-goemotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-offline-goemotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0183
- Accuracy: 0.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
timhbach/Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
|
timhbach
| 2022-03-28T06:27:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-28T03:21:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0231
- eval_precision: 0.7448
- eval_recall: 0.75
- eval_f1: 0.7474
- eval_accuracy: 0.9942
- eval_runtime: 61.7618
- eval_samples_per_second: 27.201
- eval_steps_per_second: 3.4
- epoch: 3.0
- step: 5670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad
|
haddadalwi
| 2022-03-28T05:04:56Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-10T14:03:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 0.4082 |
| No log | 2.0 | 80 | 0.3855 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln31
|
BigSalmon
| 2022-03-28T00:48:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T23:08:12Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln31")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln31")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
|
huggingtweets/jacobe
|
huggingtweets
| 2022-03-27T23:02:12Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T23:01:35Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jacobe/1648422127637/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1025926108984664064/2ZHTSIof_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rowel Atienza</div>
<div style="text-align: center; font-size: 14px;">@jacobe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rowel Atienza.
| Data | Rowel Atienza |
| --- | --- |
| Tweets downloaded | 100 |
| Retweets | 29 |
| Short tweets | 4 |
| Tweets kept | 67 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1uzq4b7w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jacobe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ouo6sis) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ouo6sis/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jacobe')
generator("My dream is", num_return_sequences=5)
```
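If you want finer control over sampling than the pipeline exposes, the same checkpoint can also be driven through the lower-level API. The following is a minimal sketch, not part of the original card; the sampling parameters are illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/jacobe")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/jacobe")

# Sampling settings below are illustrative, not taken from the original card.
inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.95,
    max_length=40,
    num_return_sequences=5,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```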
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/baguioni
|
huggingtweets
| 2022-03-27T22:55:21Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T22:54:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/baguioni/1648421716784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from baguio.
| Data | baguio |
| --- | --- |
| Tweets downloaded | 3012 |
| Retweets | 1090 |
| Short tweets | 527 |
| Tweets kept | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z9nh9v8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/baguioni-elonmusk-jacobe
|
huggingtweets
| 2022-03-27T22:44:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T22:43:39Z |
---
language: en
thumbnail: http://www.huggingtweets.com/baguioni-elonmusk-jacobe/1648421056394/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1025926108984664064/2ZHTSIof_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Rowel Atienza & baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni-elonmusk-jacobe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Rowel Atienza & baguio.
| Data | Elon Musk | Rowel Atienza | baguio |
| --- | --- | --- | --- |
| Tweets downloaded | 1621 | 100 | 3012 |
| Retweets | 69 | 29 | 1090 |
| Short tweets | 520 | 4 | 527 |
| Tweets kept | 1032 | 67 | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xuj1tda/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni-elonmusk-jacobe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fpkbu3i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fpkbu3i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni-elonmusk-jacobe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
leonadase/bert-base-chinese-finetuned-fdRE
|
leonadase
| 2022-03-27T20:52:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval2010_task8",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T19:04:51Z |
---
tags:
- generated_from_trainer
datasets:
- sem_eval2010_task8
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuned-fdRE
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval2010_task8
type: sem_eval2010_task8
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9080962800875274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-fdRE
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the sem_eval2010_task8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
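In the absence of documented usage, here is a minimal inference sketch. It assumes the standard `text-classification` pipeline works with this checkpoint; the example sentence is a placeholder, since the card does not specify the expected input format (for example, whether entities must be marked).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="leonadase/bert-base-chinese-finetuned-fdRE",
)

# Placeholder input; the expected format (e.g. entity markers) is not documented.
print(classifier("变压器的温度过高导致绝缘老化。"))
```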
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 46 | 0.5571 | 0.7812 |
| No log | 2.0 | 92 | 0.4030 | 0.8621 |
| No log | 3.0 | 138 | 0.3139 | 0.8928 |
| No log | 4.0 | 184 | 0.2716 | 0.9081 |
| No log | 5.0 | 230 | 0.2564 | 0.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ikram54/autotrain-harassement-675420038
|
ikram54
| 2022-03-27T18:08:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:ikram54/autotrain-data-harassement",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T18:06:02Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ikram54/autotrain-data-harassement
co2_eq_emissions: 2.6332836871905054
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 675420038
- CO2 Emissions (in grams): 2.6332836871905054
## Validation Metrics
- Loss: 0.8747465014457703
- Accuracy: 0.7085201793721974
- Macro F1: 0.579743989078862
- Micro F1: 0.7085201793721974
- Weighted F1: 0.6913786522271296
- Macro Precision: 0.5669375905888698
- Micro Precision: 0.7085201793721974
- Weighted Precision: 0.6760144007300164
- Macro Recall: 0.5940655209452201
- Micro Recall: 0.7085201793721974
- Weighted Recall: 0.7085201793721974
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ikram54/autotrain-harassement-675420038
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
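To turn the raw output into a predicted class, you can read the label mapping from the exported config. This is a short follow-up sketch, assuming the AutoTrain export includes the usual `id2label` mapping:
```python
import torch

# Continues from the snippet above: convert logits to a predicted label.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```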
|
willcai/wav2vec2_common_voice_accents_indian_only_rerun
|
willcai
| 2022-03-27T18:00:16Z | 2 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-27T06:51:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_indian_only_rerun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_indian_only_rerun
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2807
## Model description
More information needed
## Intended uses & limitations
More information needed
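No usage example is provided, so the following is a hedged inference sketch. It assumes the repository ships the usual Wav2Vec2 processor files (vocabulary and feature extractor) and that the input audio is 16 kHz mono; the dummy waveform is a placeholder.
```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "willcai/wav2vec2_common_voice_accents_indian_only_rerun"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder: replace with real 16 kHz mono audio (e.g. loaded via torchaudio or librosa).
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```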
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 588
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6205 | 25.0 | 400 | 1.4584 |
| 0.3427 | 50.0 | 800 | 1.8377 |
| 0.1213 | 75.0 | 1200 | 1.6086 |
| 0.0643 | 100.0 | 1600 | 1.5136 |
| 0.0433 | 125.0 | 2000 | 1.4882 |
| 0.0323 | 150.0 | 2400 | 1.2204 |
| 0.0265 | 175.0 | 2800 | 1.3034 |
| 0.0206 | 200.0 | 3200 | 1.2866 |
| 0.0191 | 225.0 | 3600 | 1.2337 |
| 0.0148 | 250.0 | 4000 | 1.1729 |
| 0.0121 | 275.0 | 4400 | 1.2059 |
| 0.0105 | 300.0 | 4800 | 1.1246 |
| 0.01 | 325.0 | 5200 | 1.1397 |
| 0.0098 | 350.0 | 5600 | 1.1684 |
| 0.0073 | 375.0 | 6000 | 1.1030 |
| 0.0061 | 400.0 | 6400 | 1.2077 |
| 0.0049 | 425.0 | 6800 | 1.2653 |
| 0.0044 | 450.0 | 7200 | 1.1587 |
| 0.0037 | 475.0 | 7600 | 1.2283 |
| 0.0033 | 500.0 | 8000 | 1.1897 |
| 0.0026 | 525.0 | 8400 | 1.2633 |
| 0.0023 | 550.0 | 8800 | 1.2571 |
| 0.002 | 575.0 | 9200 | 1.2807 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
|
scasutt
| 2022-03-27T17:07:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-25T17:45:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4658
- Wer: 0.5037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
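The values above map roughly onto `transformers.TrainingArguments` as sketched below. This is a hypothetical reconstruction, not the published training script; the output directory is assumed, and the Adam betas/epsilon are the library defaults, which match the listed values.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed run configuration; the actual
# training script is not published with this card.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53_toy_train_data_augment_0.1",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 8 * 2 = 16
    warmup_steps=1000,
    num_train_epochs=20,
    seed=42,
    lr_scheduler_type="linear",
)
```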
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.447 | 1.05 | 250 | 3.3799 | 1.0 |
| 3.089 | 2.1 | 500 | 3.4868 | 1.0 |
| 3.063 | 3.15 | 750 | 3.3155 | 1.0 |
| 2.4008 | 4.2 | 1000 | 1.2934 | 0.8919 |
| 1.618 | 5.25 | 1250 | 0.7847 | 0.7338 |
| 1.3038 | 6.3 | 1500 | 0.6459 | 0.6712 |
| 1.2074 | 7.35 | 1750 | 0.5705 | 0.6269 |
| 1.1062 | 8.4 | 2000 | 0.5267 | 0.5843 |
| 1.026 | 9.45 | 2250 | 0.5108 | 0.5683 |
| 0.9505 | 10.5 | 2500 | 0.5066 | 0.5568 |
| 0.893 | 11.55 | 2750 | 0.5161 | 0.5532 |
| 0.8535 | 12.6 | 3000 | 0.4994 | 0.5341 |
| 0.8462 | 13.65 | 3250 | 0.4626 | 0.5262 |
| 0.8334 | 14.7 | 3500 | 0.4593 | 0.5197 |
| 0.842 | 15.75 | 3750 | 0.4651 | 0.5126 |
| 0.7678 | 16.81 | 4000 | 0.4687 | 0.5120 |
| 0.7873 | 17.86 | 4250 | 0.4716 | 0.5070 |
| 0.7486 | 18.91 | 4500 | 0.4657 | 0.5033 |
| 0.7073 | 19.96 | 4750 | 0.4658 | 0.5037 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
csukuangfj/icefall-asr-librispeech-stateless-transducer-2022-03-27-2
|
csukuangfj
| 2022-03-27T15:59:24Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-03-27T13:27:21Z |
## Introduction
Please see <https://github.com/k2-fsa/icefall/pull/271> for more details.
|
EMBO/bio-lm
|
EMBO
| 2022-03-27T15:46:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"language model",
"dataset:EMBO/biolang",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- english
thumbnail:
tags:
- language model
license:
datasets:
- EMBO/biolang
metrics:
-
---
# bio-lm
## Model description
This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained with a masked language modeling objective on a compendium of English scientific text from the life sciences, the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
For a quick check of the model as-is on a fill-mask task:
```python
from transformers import pipeline, RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
text = "Let us try this model to see if it <mask>."
fill_mask = pipeline(
"fill-mask",
model='EMBO/bio-lm',
tokenizer=tokenizer
)
fill_mask(text)
```
#### Limitations and bias
This model should be fine-tuned on a specific task such as token classification.
The model must be used with the `roberta-base` tokenizer.
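As a minimal sketch of that fine-tuning setup (the number of labels is task-specific and purely illustrative; it is not part of this repository):
```python
from transformers import RobertaTokenizerFast, RobertaForTokenClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", max_len=512)
# num_labels is illustrative; set it to the label set of your token-classification task.
model = RobertaForTokenClassification.from_pretrained("EMBO/bio-lm", num_labels=5)
# Tokenize your labelled corpus with `tokenizer` and fine-tune with `Trainer` or a custom loop.
```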
## Training data
The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang), which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
End of training:
```
trainset: 'loss': 0.8653350830078125
validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597
```
## Eval results
Eval on test set:
```
recall: 0.814471959728645
```
|