| modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Emanuel/roebrta-base-val-test
|
Emanuel
| 2022-01-23T15:12:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: language-modeling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-modeling
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4229
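Since the checkpoint is tagged for the `fill-mask` pipeline, a minimal usage sketch (assuming the standard RoBERTa `<mask>` token) could look like this:
```python
from transformers import pipeline

# load the fine-tuned checkpoint as a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="Emanuel/roebrta-base-val-test")

# RoBERTa-style models use "<mask>" as the mask token
print(fill_mask("The goal of language modeling is to predict the next <mask>."))
```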
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
dandelin/vilt-b32-finetuned-flickr30k
|
dandelin
| 2022-01-23T09:46:32Z | 34 | 3 |
transformers
|
[
"transformers",
"pytorch",
"vilt",
"arxiv:1505.04870",
"arxiv:2102.03334",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k
Vision-and-Language Transformer (ViLT) model fine-tuned on [Flickr30k](https://arxiv.org/abs/1505.04870#:~:text=The%20Flickr30k%20dataset%20has%20become,for%20sentence%2Dbased%20image%20description.&text=Such%20annotations%20are%20essential%20for,entity%20mentions%20in%20an%20image.). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the model for image and text retrieval.
### How to use
Here is how to use the model in PyTorch:
```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")

# score each candidate text against the image
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    # forward pass
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
|
wesam266/wav2vec2-large-xlsr-53_english
|
wesam266
| 2022-01-23T02:40:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_english
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2620
- Wer: 0.1916
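For quick inference, a hedged sketch using the `automatic-speech-recognition` pipeline is shown below; the audio path is a placeholder and the model expects 16 kHz mono audio:
```python
from transformers import pipeline

# load the fine-tuned CTC checkpoint as an ASR pipeline
asr = pipeline("automatic-speech-recognition", model="wesam266/wav2vec2-large-xlsr-53_english")

# "sample.wav" is a placeholder; supply 16 kHz mono audio
print(asr("sample.wav"))
```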
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0506 | 0.12 | 250 | 3.0206 | 0.9999 |
| 1.4381 | 0.25 | 500 | 1.0267 | 0.6323 |
| 1.0903 | 0.37 | 750 | 0.5841 | 0.3704 |
| 1.0384 | 0.5 | 1000 | 0.5156 | 0.3348 |
| 0.9658 | 0.62 | 1250 | 0.4721 | 0.3221 |
| 0.9184 | 0.74 | 1500 | 0.4301 | 0.3213 |
| 0.8939 | 0.87 | 1750 | 0.4188 | 0.2884 |
| 0.9051 | 0.99 | 2000 | 0.3852 | 0.2807 |
| 0.563 | 1.12 | 2250 | 0.3752 | 0.2804 |
| 0.6122 | 1.24 | 2500 | 0.3745 | 0.2732 |
| 0.6213 | 1.36 | 2750 | 0.3671 | 0.2575 |
| 0.5839 | 1.49 | 3000 | 0.3560 | 0.2578 |
| 0.615 | 1.61 | 3250 | 0.3555 | 0.2536 |
| 0.5557 | 1.74 | 3500 | 0.3511 | 0.2485 |
| 0.5497 | 1.86 | 3750 | 0.3364 | 0.2425 |
| 0.5412 | 1.98 | 4000 | 0.3253 | 0.2418 |
| 0.2834 | 2.11 | 4250 | 0.3293 | 0.2322 |
| 0.2723 | 2.23 | 4500 | 0.3157 | 0.2322 |
| 0.2713 | 2.35 | 4750 | 0.3148 | 0.2304 |
| 0.2878 | 2.48 | 5000 | 0.3143 | 0.2286 |
| 0.2776 | 2.6 | 5250 | 0.3122 | 0.2250 |
| 0.2553 | 2.73 | 5500 | 0.3003 | 0.2234 |
| 0.278 | 2.85 | 5750 | 0.2973 | 0.2198 |
| 0.2445 | 2.97 | 6000 | 0.2938 | 0.2180 |
| 0.4361 | 3.1 | 6250 | 0.2914 | 0.2132 |
| 0.3979 | 3.22 | 6500 | 0.2916 | 0.2125 |
| 0.4221 | 3.35 | 6750 | 0.2879 | 0.2113 |
| 0.4051 | 3.47 | 7000 | 0.2819 | 0.2100 |
| 0.4218 | 3.59 | 7250 | 0.2812 | 0.2072 |
| 0.4201 | 3.72 | 7500 | 0.2772 | 0.2055 |
| 0.3515 | 3.84 | 7750 | 0.2747 | 0.2031 |
| 0.4021 | 3.97 | 8000 | 0.2702 | 0.2018 |
| 0.4304 | 4.09 | 8250 | 0.2721 | 0.2007 |
| 0.3923 | 4.21 | 8500 | 0.2689 | 0.1991 |
| 0.3824 | 4.34 | 8750 | 0.2692 | 0.1980 |
| 0.3743 | 4.46 | 9000 | 0.2718 | 0.1950 |
| 0.3771 | 4.59 | 9250 | 0.2653 | 0.1950 |
| 0.4048 | 4.71 | 9500 | 0.2649 | 0.1934 |
| 0.3539 | 4.83 | 9750 | 0.2638 | 0.1919 |
| 0.3498 | 4.96 | 10000 | 0.2620 | 0.1916 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vuiseng9/pegasus-xsum
|
vuiseng9
| 2022-01-23T02:33:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
This model was developed with transformers v4.13, with a minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 3db4b452
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1 # 2 cards on xsum
NEPOCH=10
RUNID=pegasus-xsum-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name xsum \
--do_train \
--adafactor \
--learning_rate 1e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 8 \
--do_eval \
--per_device_eval_batch_size 8 \
--num_beams 8 \
--max_source_length 512 \
--max_target_length 64 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_strategy steps \
--save_steps 2000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-xsum-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-test/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-xsum \
--dataset_name xsum \
--max_source_length 512 \
--max_target_length 64 \
--do_predict \
--per_device_eval_batch_size 16 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
Although fine-tuning was carried out for 10 epochs, this model is the checkpoint (at 62000 steps, ~4.9 epochs, ~20 hrs) with the lowest loss during training. Running test/predict with this checkpoint should give the results below.
```
***** predict metrics *****
predict_gen_len = 24.0499
predict_loss = 1.5801
predict_rouge1 = 47.2124
predict_rouge2 = 24.3673
predict_rougeL = 39.0055
predict_rougeLsum = 39.0007
predict_runtime = 0:34:23.32
predict_samples = 11334
predict_samples_per_second = 5.493
predict_steps_per_second = 0.344
```
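For plain inference, the checkpoint should also load through the standard `transformers` summarization pipeline; the snippet below is a minimal sketch with a placeholder article:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="vuiseng9/pegasus-xsum")

# placeholder input; XSum-style models produce a short, single-sentence summary
article = "The full text of a news article goes here ..."
print(summarizer(article, max_length=64, num_beams=8))
```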
|
danhsf/t5-small-finetuned-en-to-pt
|
danhsf
| 2022-01-23T00:38:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-pt
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3295
- Bleu: 5.6807
- Gen Len: 18.6772
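A hedged inference sketch is shown below; the `translate English to Portuguese:` prefix is an assumption based on the usual T5 task-prefix convention and should be adjusted to whatever prefix was used during fine-tuning:
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="danhsf/t5-small-finetuned-en-to-pt")

# the task prefix below is an assumption (T5 convention); match it to the training setup
print(translator("translate English to Portuguese: The book is on the table."))
```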
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.5787 | 1.0 | 6250 | 0.4928 | 4.1007 | 18.638 |
| 0.5089 | 2.0 | 12500 | 0.4463 | 4.3492 | 18.663 |
| 0.4652 | 3.0 | 18750 | 0.4215 | 4.68 | 18.6652 |
| 0.4353 | 4.0 | 25000 | 0.3980 | 4.8172 | 18.6708 |
| 0.4042 | 5.0 | 31250 | 0.3799 | 4.9719 | 18.6514 |
| 0.3734 | 6.0 | 37500 | 0.3676 | 5.2226 | 18.6572 |
| 0.3396 | 7.0 | 43750 | 0.3513 | 5.2693 | 18.6596 |
| 0.308 | 8.0 | 50000 | 0.3400 | 5.4546 | 18.676 |
| 0.2767 | 9.0 | 56250 | 0.3331 | 5.5649 | 18.6708 |
| 0.2424 | 10.0 | 62500 | 0.3295 | 5.6807 | 18.6772 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
pere/xls-test
|
pere
| 2022-01-22T18:40:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
MelissaTESSA/distilbert-base-uncased-finetuned-cola
|
MelissaTESSA
| 2022-01-22T17:01:17Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5206791471093309
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6324
- Matthews Correlation: 0.5207
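A minimal usage sketch for grammatical-acceptability scoring is given below; the emitted labels (e.g. `LABEL_0`/`LABEL_1`) are the defaults, and their mapping to acceptable/unacceptable is an assumption to verify against the training configuration:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MelissaTESSA/distilbert-base-uncased-finetuned-cola")

# CoLA is a grammatical-acceptability task; check the LABEL_0/LABEL_1 mapping in the config
print(classifier("The book was wrote by the author."))
```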
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5245 | 1.0 | 535 | 0.5155 | 0.4181 |
| 0.3446 | 2.0 | 1070 | 0.5623 | 0.4777 |
| 0.2331 | 3.0 | 1605 | 0.6324 | 0.5207 |
| 0.1678 | 4.0 | 2140 | 0.7706 | 0.5106 |
| 0.1255 | 5.0 | 2675 | 0.8852 | 0.4998 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ms29315/distilbert-base-uncased-finetuned-cola
|
ms29315
| 2022-01-21T19:56:06Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ms29315/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ms29315/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3100
- Validation Loss: 0.5090
- Epoch: 0
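Since this checkpoint was trained with Keras (note the `tf` tag), a minimal loading sketch with the TensorFlow classes might look like this:
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ms29315/distilbert-base-uncased-finetuned-cola")
model = TFAutoModelForSequenceClassification.from_pretrained("ms29315/distilbert-base-uncased-finetuned-cola")

# tokenize a sample sentence and inspect the raw classification logits
inputs = tokenizer("They drank the pub dry.", return_tensors="tf")
print(model(inputs).logits)
```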
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3100 | 0.5090 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.18.0
- Tokenizers 0.10.3
|
facebook/xm_transformer_600m-en_zh-multi_domain
|
facebook
| 2022-01-21T19:02:57Z | 5 | 2 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"dataset:must_c",
"dataset:covost2",
"arxiv:2010.05171",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-zh
datasets:
- must_c
- covost2
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_zh-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-Chinese
- Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-zh-cv7_css10](https://huggingface.co/facebook/tts_transformer-zh-cv7_css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_zh-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator(model, cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-zh-cv7_css10",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
|
facebook/xm_transformer_600m-en_tr-multi_domain
|
facebook
| 2022-01-21T19:02:30Z | 18 | 1 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"dataset:must_c",
"dataset:covost2",
"arxiv:2010.05171",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-tr
datasets:
- must_c
- covost2
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_tr-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-Turkish
- Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-tr-cv7](https://huggingface.co/facebook/tts_transformer-tr-cv7)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_tr-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator(model, cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-tr-cv7",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
|
facebook/xm_transformer_600m-en_ar-multi_domain
|
facebook
| 2022-01-21T19:02:06Z | 5 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"dataset:must_c",
"dataset:covost2",
"arxiv:2010.05171",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-ar
datasets:
- must_c
- covost2
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_ar-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-Arabic
- Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-ar-cv7](https://huggingface.co/facebook/tts_transformer-ar-cv7)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_ar-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator(model, cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-ar-cv7",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
|
facebook/xm_transformer_600m-en_es-multi_domain
|
facebook
| 2022-01-21T19:01:24Z | 2 | 1 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"dataset:must_c",
"dataset:europarl_st",
"dataset:voxpopuli",
"arxiv:2010.05171",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-es
datasets:
- must_c
- europarl_st
- voxpopuli
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_es-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-Spanish
- Trained on MuST-C, EuroParl-ST, VoxPopuli, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-es-css10](https://huggingface.co/facebook/tts_transformer-es-css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_es-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator(model, cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-es-css10",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
|
facebook/xm_transformer_600m-ru_en-multi_domain
|
facebook
| 2022-01-21T18:56:34Z | 6 | 2 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"dataset:mtedx",
"dataset:covost2",
"arxiv:2010.05171",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: ru-en
datasets:
- mtedx
- covost2
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-ru_en-multi_domain/resolve/main/common_voice_ru_18945535.flac
---
# xm_transformer_600m-ru_en-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- Russian-English
- Trained on mTEDx, CoVoST 2, OpenSTT, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/fastspeech2-en-ljspeech](https://huggingface.co/facebook/fastspeech2-en-ljspeech)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-ru_en-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator(model, cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/fastspeech2-en-ljspeech",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
jiobiala24/wav2vec2-base-checkpoint-7.1
|
jiobiala24
| 2022-01-21T15:50:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-7.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-7.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-6](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-6) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9369
- Wer: 0.3243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3124 | 1.75 | 1000 | 0.5602 | 0.3403 |
| 0.2428 | 3.5 | 2000 | 0.5924 | 0.3431 |
| 0.1884 | 5.24 | 3000 | 0.6161 | 0.3423 |
| 0.1557 | 6.99 | 4000 | 0.6570 | 0.3415 |
| 0.1298 | 8.74 | 5000 | 0.6837 | 0.3446 |
| 0.1141 | 10.49 | 6000 | 0.7304 | 0.3396 |
| 0.1031 | 12.24 | 7000 | 0.7264 | 0.3410 |
| 0.0916 | 13.99 | 8000 | 0.7229 | 0.3387 |
| 0.0835 | 15.73 | 9000 | 0.8078 | 0.3458 |
| 0.0761 | 17.48 | 10000 | 0.8304 | 0.3408 |
| 0.0693 | 19.23 | 11000 | 0.8290 | 0.3387 |
| 0.0646 | 20.98 | 12000 | 0.8593 | 0.3372 |
| 0.0605 | 22.73 | 13000 | 0.8728 | 0.3345 |
| 0.0576 | 24.48 | 14000 | 0.9111 | 0.3297 |
| 0.0529 | 26.22 | 15000 | 0.9247 | 0.3273 |
| 0.0492 | 27.97 | 16000 | 0.9248 | 0.3250 |
| 0.0472 | 29.72 | 17000 | 0.9369 | 0.3243 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
deepparag/DumBot
|
deepparag
| 2022-01-21T15:40:27Z | 148 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
thumbnail: https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png
tags:
- conversational
license: mit
---
# THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona)
A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
Trained on:
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
https://www.kaggle.com/jef1056/discord-data
[Live Demo](https://dumbot-331213.uc.r.appspot.com/)
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot")
model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
Gianpe/en_textcat_emotion_xlm
|
Gianpe
| 2022-01-21T15:09:03Z | 3 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_emotion_xlm
results: []
---
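As a usage sketch for this spaCy text-classification pipeline (assuming the packaged pipeline wheel has been installed from this repository):
```python
import spacy

# assumes the packaged pipeline has been installed, so spacy.load can resolve it by name
nlp = spacy.load("en_textcat_emotion_xlm")

doc = nlp("I am thrilled with how this turned out!")
print(doc.cats)  # emotion label -> score
```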
|
shivam/xls-r-hindi
|
shivam
| 2022-01-21T14:00:59Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4484
- Wer: 1.0145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1844 | 3.4 | 500 | 5.2015 | 0.9999 |
| 3.3962 | 6.8 | 1000 | 3.4017 | 1.0002 |
| 2.5433 | 10.2 | 1500 | 1.6884 | 1.0222 |
| 1.5099 | 13.6 | 2000 | 0.7929 | 1.0188 |
| 1.2685 | 17.01 | 2500 | 0.6122 | 1.0191 |
| 1.1844 | 20.41 | 3000 | 0.5434 | 1.0197 |
| 1.0945 | 23.81 | 3500 | 0.5208 | 1.0316 |
| 1.0506 | 27.21 | 4000 | 0.4941 | 1.0139 |
| 1.0199 | 30.61 | 4500 | 0.4736 | 1.0106 |
| 0.9546 | 34.01 | 5000 | 0.4664 | 1.0164 |
| 0.9388 | 37.41 | 5500 | 0.4565 | 1.0085 |
| 0.9125 | 40.81 | 6000 | 0.4636 | 1.0148 |
| 0.8733 | 44.22 | 6500 | 0.4530 | 1.0154 |
| 0.8829 | 47.62 | 7000 | 0.4494 | 1.0152 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
alistvt/bert-base-uncased-pretrained-mlm-coqa-stories
|
alistvt
| 2022-01-21T13:17:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrained-mlm-coqa-stories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrained-mlm-coqa-stories
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8310
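A hedged sketch of scoring masked tokens directly with the model (BERT-style checkpoints use the `[MASK]` token) is shown below:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("alistvt/bert-base-uncased-pretrained-mlm-coqa-stories")
model = AutoModelForMaskedLM.from_pretrained("alistvt/bert-base-uncased-pretrained-mlm-coqa-stories")

inputs = tokenizer("The children listened to the [MASK] before bedtime.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# take the top prediction at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(top_id))
```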
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0573 | 1.0 | 2479 | 1.8805 |
| 1.9517 | 2.0 | 4958 | 1.8377 |
| 1.9048 | 3.0 | 7437 | 1.8310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
alistvt/bert-base-uncased-pretrained-clm-coqa-stories
|
alistvt
| 2022-01-21T12:36:10Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrained-clm-coqa-stories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrained-clm-coqa-stories
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0201 | 1.0 | 2479 | 0.0018 |
| 0.0033 | 2.0 | 4958 | 0.0003 |
| 0.0014 | 3.0 | 7437 | 0.0002 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-large-xls-r-300m-ur
|
anuragshas
| 2022-01-21T04:32:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ur
This model is a fine-tuned version of [anuragshas/wav2vec2-large-xls-r-300m-ur](https://huggingface.co/anuragshas/wav2vec2-large-xls-r-300m-ur) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0508
- Wer: 0.7328
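A hedged inference sketch with explicit CTC decoding is given below; the audio path is a placeholder and the input is resampled to the 16 kHz rate expected by XLS-R models:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xls-r-300m-ur")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xls-r-300m-ur")

# "sample.wav" is a placeholder path; resample to 16 kHz mono
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```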
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0719 | 66.67 | 400 | 1.8510 | 0.7432 |
| 0.0284 | 133.33 | 800 | 2.0088 | 0.7415 |
| 0.014 | 200.0 | 1200 | 2.0508 | 0.7328 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
|
espnet
| 2022-01-21T04:15:13Z | 8 | 2 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
```
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
|
espnet
| 2022-01-21T04:09:13Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
```
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
guoqiang/glm
|
guoqiang
| 2022-01-21T01:21:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# WudaoSailing
WudaoSailing is a package for pretraining Chinese language models and fine-tuning them on downstream tasks. It currently supports GLM, Bert, T5, Cogview and Roberta models.
## Get Started
### Docker Image
We prepare two docker images based on CUDA 10.2 and CUDA 11.2. You can build images from the docker file [docs/docker/cuda102.dockerfile](docs/docker/cuda102.dockerfile) or pull the pre-built images from Docker Hub and run them with docker v19.03+
```shell
nvidia-docker run -id --hostname=V100 --network=host\
--ipc=host --shm-size=16gb --name=deepspeed-cuda \
-e NVIDIA_VISIBLE_DEVICES=0,1,2,3 \
-v /DATA/disk1/docker/containers/:/data deepspeed/cuda102:lastest
```
or replace `cuda102` with `cuda112`.
```shell
docker build -f cuda102.dockerfile -t deepspeed/cuda102 .
```
### Clone this repo
```shell
git clone https://github.com/wangguojim/WudaoSailing.git
cd WudaoSailing
pip install -r requirements.txt
```
## GLM
We show some examples based on the GLM model.
### Finetune
We provide scripts for finetuning GLM on some downstream tasks.
#### SuperGLUE
- Download the [SuperGlue](https://super.gluebenchmark.com/tasks) data and check the experiment setup in
[examples/glm/scripts/ds_finetune_superglue.sh](examples/glm/scripts/ds_finetune_superglue.sh). Note that `DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH`
need to be changed to your local path. You may also change the `batch-size` and `nproc_per_node` according to your
available hardware.
- Run the following script for the text similarity fine-tuning task (using the afqmc dataset as an example)
```
cd examples/glm/
bash scripts/ds_finetune_superglue.sh\
config/model_blocklm_large_chinese.sh\
config_tasks/task_afqmc.sh
```
- Run the following script for the text classification fine-tuning task (using the tnews dataset as an example)
```
cd examples/glm/
bash scripts/ds_finetune_superglue.sh\
config/model_blocklm_large_chinese.sh\
config_tasks/task_tnews.sh
```
- Run the following script for the causal inference fine-tuning task (using the COPA dataset as an example)
```
cd examples/glm/
bash scripts/ds_finetune_superglue.sh\
config/model_blocklm_large_chinese.sh\
config_tasks/task_copa.sh
```
- To apply GLM to a new NLU dataset with cloze-filling finetuning, implement a `DataProcessor` in
[examples/glm/tasks/superglue/dataset.py](examples/glm/tasks/superglue/dataset.py) for data loading and add a `PVP` in
[examples/glm/tasks/superglue/pvp.py](examples/glm/tasks/superglue/pvp.py) for the cloze question. More details can be found
[here](examples/glm/tasks/superglue/README.md).
#### Blank Filling (Interactive)
* Change `CHECKPOINT_PATH` to your local path. Run the following script
```
bash config/generate_block.sh\
config/model_blocklm_large_chinese.sh
```
##### Example1 (Entity Prediction):
Context: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建,门高25米,顶上矗立两武士青铜古兵车铸像。
GLM:拿破仑军队攻克米兰城
##### Example2 (Sentence Prediction)
Context: 工业互联网(Industrial Internet)是新一代信息通信技术与工业经济深度融合的新型基础设施、应用模式和工业生态,通过对人、机、物、系统等的全面连接,构建起覆盖全产业链、全价值链的全新制造和服务体系,为工业乃至产业数字化、网络化、智能化发展提供了实现途径,是第四次工业革命的重要基石。[sMASK]它以网络为基础、平台为中枢、数据为要素、安全为保障,既是工业数字化、网络化、智能化转型的基础设施,也是互联网、大数据、人工智能与实体经济深度融合的应用模式,同时也是一种新业态、新产业,将重塑企业形态、供应链和产业链。当前,工业互联网融合应用向国民经济重点行业广泛拓展,形成平台化设计、智能化制造、网络化协同、个性化定制、服务化延伸、数字化管理六大新模式,赋能、赋智、赋值作用不断显现,有力的促进了实体经济提质、增效、降本、绿色、安全发展。
GLM: 工业互联网是制造业技术、管理、模式的重大变革,是推动互联网、大数据、人工智能和实体经济深度融合的重要载体,是建设制造强国和网络强国的重要基础。
##### Example3 (Long Text Generation)
Context: 问题:高斯所在的国家有什么汽车品牌?答案:[gMASK]
GLM:答案:[gMASK]<|startofpiece|>德国奔驰、德国大众、别克、沃尔沃、斯柯达、本田、雪铁龙.
### Ptuning
Run the following script to integrate p-tuning with GLM:
```shell
cd algutils/ptuning/
bash finetune_zy.sh
```
### Pretrain
Run the following script to pre-train the GLM-Large model
```shell
cd examples/glm/
bash scripts/ds_pretrain_nvidia.sh config/ds_block_large.sh
```
The script [examples/glm/config/ds_pretrain_nvidia.sh](examples/glm/config/ds_pretrain_nvidia.sh) launches the training program with DeepSpeed. You should change `NUM_WORKERS` and `NUM_GPUS_PER_WORKER` to the number of workers and the number of gpus per worker. Also change `HOST_FILE_PATH` to the path to an OpenMPI-style hostfile. More details about DeepSpeed launcher can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).
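For reference, an OpenMPI-style hostfile is just a plain-text list of worker hostnames and the number of GPU slots each contributes; a minimal two-node example (hostnames are placeholders) looks like:
```
worker-0 slots=8
worker-1 slots=8
```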
The file [examples/glm/config/ds_block_large.sh](examples/glm/config/ds_block_large.sh) defines the hyperparameters for pretraining. Most of the arguments are fairly self-explanatory. Specifically, `--train-data` can be multiple keywords defined in `NAMED_CORPORA` in [data_utils/corpora.py](data_utils/corpora.py). The hyperparameters of the optimizer are defined in the corresponding json file under `config`. The semantics of the json file can be found [here](https://www.deepspeed.ai/docs/config-json).
## Bert
We show some examples based on the Bert model.
### Pretrain
Run the following script to pre-train the Bert model
```shell
cd examples/bert/
python quick_start.py
```
## CogView
### Pretrain
Run the following script to pre-train the CogView model
```shell
cd examples/cogview/
bash config/pretrain_multiple_nodes.sh
```
### Inference
Run the following script to test the model's text-to-image generation ability
```shell
cd examples/cogview/
bash config/text2image_cogview.sh
```
|
huggingtweets/anticarbons
|
huggingtweets
| 2022-01-20T22:52:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/anticarbons/1642719091326/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477498953524518912/yvJkd9VL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ANTICARBON</div>
<div style="text-align: center; font-size: 14px;">@anticarbons</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ANTICARBON.
| Data | ANTICARBON |
| --- | --- |
| Tweets downloaded | 2518 |
| Retweets | 427 |
| Short tweets | 352 |
| Tweets kept | 1739 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/s9q99sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anticarbons's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/anticarbons')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anuragshas/wav2vec2-large-xls-r-300m-hi
|
anuragshas
| 2022-01-20T20:38:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4156
- Wer: 0.7181
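A minimal transcription sketch (not part of the original card; the audio path is a placeholder and should point to 16 kHz Hindi speech):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/wav2vec2-large-xls-r-300m-hi")
print(asr("sample_hindi_audio.wav"))  # returns {"text": "..."}
```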
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7703 | 2.72 | 400 | 2.2274 | 0.9259 |
| 0.6515 | 5.44 | 800 | 1.5812 | 0.7581 |
| 0.339 | 8.16 | 1200 | 2.0590 | 0.7825 |
| 0.2262 | 10.88 | 1600 | 2.0324 | 0.7603 |
| 0.1665 | 13.6 | 2000 | 2.1396 | 0.7481 |
| 0.1311 | 16.33 | 2400 | 2.2090 | 0.7379 |
| 0.1079 | 19.05 | 2800 | 2.3907 | 0.7612 |
| 0.0927 | 21.77 | 3200 | 2.5294 | 0.7478 |
| 0.0748 | 24.49 | 3600 | 2.5024 | 0.7452 |
| 0.0644 | 27.21 | 4000 | 2.4715 | 0.7307 |
| 0.0569 | 29.93 | 4400 | 2.4156 | 0.7181 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
oandreae/financial_sentiment_model
|
oandreae
| 2022-01-20T20:00:01Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"perceiver",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- recall
- accuracy
- precision
model-index:
- name: financial_sentiment_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Recall
type: recall
value: 0.8839956357328868
- name: Accuracy
type: accuracy
value: 0.8804123711340206
- name: Precision
type: precision
value: 0.8604175202419276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# financial_sentiment_model
This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Recall: 0.8840
- Accuracy: 0.8804
- Precision: 0.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.4481 | 1.0 | 273 | 0.4035 | 0.8526 | 0.8433 | 0.7955 |
| 0.4069 | 2.0 | 546 | 0.4478 | 0.8683 | 0.8289 | 0.8123 |
| 0.2225 | 3.0 | 819 | 0.3167 | 0.8747 | 0.8680 | 0.8387 |
| 0.1245 | 4.0 | 1092 | 0.3467 | 0.8840 | 0.8804 | 0.8604 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tomwetherell/TOMFINSEN
|
tomwetherell
| 2022-01-20T18:19:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"perceiver",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- recall
- accuracy
- precision
model-index:
- name: TOMFINSEN
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Recall
type: recall
value: 0.8985861629736692
- name: Accuracy
type: accuracy
value: 0.8742268041237113
- name: Precision
type: precision
value: 0.8509995913451198
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TOMFINSEN
This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Recall: 0.8986
- Accuracy: 0.8742
- Precision: 0.8510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.5403 | 1.0 | 273 | 0.4207 | 0.8358 | 0.8619 | 0.8534 |
| 0.3939 | 2.0 | 546 | 0.3750 | 0.8943 | 0.8577 | 0.8225 |
| 0.1993 | 3.0 | 819 | 0.3113 | 0.8882 | 0.8660 | 0.8367 |
| 0.301 | 4.0 | 1092 | 0.3642 | 0.8986 | 0.8742 | 0.8510 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ilevs/opus-mt-en-ru-finetuned-en-to-ru
|
ilevs
| 2022-01-20T18:18:30Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned-en-to-ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7682
- Bleu: 14.6112
- Gen Len: 7.202
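A usage sketch (not part of the original card; the input sentence is an arbitrary placeholder):
```python
from transformers import pipeline

translator = pipeline("translation", model="ilevs/opus-mt-en-ru-finetuned-en-to-ru")
print(translator("How are you today?", max_length=64))
```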
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.3198 | 1.0 | 4956 | 2.1261 | 9.5339 | 6.7709 |
| 1.9732 | 2.0 | 9912 | 1.9639 | 10.4715 | 7.1254 |
| 1.7127 | 3.0 | 14868 | 1.8780 | 11.6128 | 7.1106 |
| 1.5614 | 4.0 | 19824 | 1.8367 | 12.8389 | 7.0468 |
| 1.4276 | 5.0 | 24780 | 1.8040 | 13.7423 | 7.0403 |
| 1.3096 | 6.0 | 29736 | 1.7820 | 14.1469 | 7.0555 |
| 1.2381 | 7.0 | 34692 | 1.7761 | 13.9987 | 7.2225 |
| 1.1784 | 8.0 | 39648 | 1.7725 | 14.4675 | 7.1799 |
| 1.1376 | 9.0 | 44604 | 1.7692 | 14.4937 | 7.1957 |
| 1.0862 | 10.0 | 49560 | 1.7682 | 14.6112 | 7.202 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
|
nntadotzip
| 2022-01-20T18:06:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 382 | 0.4695 |
| 0.5633 | 2.0 | 764 | 0.3361 |
| 0.3533 | 3.0 | 1146 | 0.3489 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ucberkeley-dlab/hate-measure-roberta-large
|
ucberkeley-dlab
| 2022-01-20T17:57:30Z | 7 | 4 |
tf-keras
|
[
"tf-keras",
"text-classification",
"hate-speech",
"counterspeech",
"irt",
"arxiv:2009.10277",
"en",
"dataset:ucberkeley-dlab/measuring-hate-speech",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text-classification
- hate-speech
- counterspeech
- irt
- arxiv:2009.10277
datasets:
- ucberkeley-dlab/measuring-hate-speech
---
# Measuring hate speech: RoBERTa-Large
This model predicts a continuous hate speech score as described in Kennedy et al. (2020).
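Since this is a tf-keras checkpoint rather than a `transformers` model, one way to load it is via `huggingface_hub` (a hedged sketch under that assumption; the expected input preprocessing is model-specific and not shown here):
```python
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("ucberkeley-dlab/hate-measure-roberta-large")
model.summary()  # inspect the expected inputs before running inference
```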
## Citation
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
## References
Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277.
|
Rocketknight1/distilroberta-base-finetuned-wikitext2
|
Rocketknight1
| 2022-01-20T17:54:46Z | 22 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.8.0-rc0
- Datasets 1.17.0
- Tokenizers 0.11.0
|
nntadotzip/bert-base-cased-IUChatbot-ontologyDts
|
nntadotzip
| 2022-01-20T16:21:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-IUChatbot-ontologyDts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-IUChatbot-ontologyDts
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 382 | 0.2686 |
| 0.3946 | 2.0 | 764 | 0.2535 |
| 0.2577 | 3.0 | 1146 | 0.2446 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ml6team/distilbart-tos-summarizer-tosdr
|
ml6team
| 2022-01-20T15:21:41Z | 22 | 15 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"t&c",
"tos",
"distilbart",
"distilbart-6-6",
"en",
"dataset:tosdr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
- t&c
- tos
- distilbart
- distilbart-6-6
datasets:
- tosdr
metrics:
- rouge1
- rouge2
- rougel
inference:
parameters:
min_length: 5
max_length: 512
do_sample: False
widget:
- text: "In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides."
---
# T&C Summarization Model
A T&C summarization model based on [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6).
This abstractive summarization model is part of a larger end-to-end T&C summarizer pipeline,
which is preceded by LSA (Latent Semantic Analysis) extractive summarization. The extractive
step shortens the T&C before it is further summarized by this model.
## Finetuning Corpus
We collaborated with [TOSDR](https://tosdr.org/) to work with their data, and the model is finetuned on it. Both the articles and
the reference summaries are reduced via extractive summarization before they are used to finetune the model.
## Contact Us
https://ml6.eu/
This abstractive model finetuning is a continuation of ML6's Christmas Project 2021: https://bit.ly/XmasProjects
## Load Finetuned Model
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
```
## Code Sample
This sample requires [sumy](https://pypi.org/project/sumy/), the LSA extractive summarization library, as an additional package to
run.
```
import re
import nltk
nltk.download('punkt')
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.nlp.stemmers import Stemmer
from sumy.summarizers.lsa import LsaSummarizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
LANGUAGE = "english"
EXTRACTED_ARTICLE_SENTENCES_LEN = 12
stemmer = Stemmer(LANGUAGE)
lsa_summarizer = LsaSummarizer(stemmer)
tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
def get_extractive_summary(text, sentences_count):
parser = PlaintextParser.from_string(text, Tokenizer(LANGUAGE))
summarized_info = lsa_summarizer(parser.document, sentences_count)
summarized_info = [element._text for element in summarized_info]
return ' '.join(summarized_info)
def get_summary(dict_summarizer_model, dict_tokenizer, text_content):
text_content = get_extractive_summary(text_content, EXTRACTED_ARTICLE_SENTENCES_LEN)
tokenizer = dict_tokenizer['tokenizer']
model = dict_summarizer_model['model']
inputs = tokenizer(text_content, max_length=dict_tokenizer['max_length'], truncation=True, return_tensors="pt")
outputs = model.generate(
inputs["input_ids"], max_length=dict_summarizer_model['max_length'], min_length=dict_summarizer_model['min_length'],
)
summarized_text = tokenizer.decode(outputs[0])
match = re.search(r"<s>(.*)</s>", summarized_text)
if match is not None: summarized_text = match.group(1)
return summarized_text.replace('<s>', '').replace('</s>', '')
test_tos = """
In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions.
Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws.
Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use.
Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities.
Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides
"""
model_dict = {
'model': model,
'max_length': 512,
'min_length': 4
}
tokenizer_dict = {
'tokenizer': tokenizer,
'max_length': 1024
}
print(get_summary(model_dict, tokenizer_dict, test_tos))
```
|
huggingtweets/aevaeavaevevave
|
huggingtweets
| 2022-01-20T15:13:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/aevaeavaevevave/1642691608974/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471448753353670660/T0h3zXn-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">aeva</div>
<div style="text-align: center; font-size: 14px;">@aevaeavaevevave</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from aeva.
| Data | aeva |
| --- | --- |
| Tweets downloaded | 3184 |
| Retweets | 985 |
| Short tweets | 659 |
| Tweets kept | 1540 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g4kejp0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aevaeavaevevave's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ikuw0pg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ikuw0pg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aevaeavaevevave')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aviator-neural/mbart_jokes
|
aviator-neural
| 2022-01-20T14:31:08Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mbart_jokes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_jokes
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0282
## Model description
This model is trained on a jokes dataset: you can ask it a question and it gives a funny answer.
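A usage sketch (not part of the original card), assuming the checkpoint works with the `text2text-generation` pipeline:
```python
from transformers import pipeline

joker = pipeline("text2text-generation", model="aviator-neural/mbart_jokes")
print(joker("Why did the chicken cross the road?", max_length=64))
```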
## Intended uses & limitations
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3455 | 1.0 | 1914 | 3.0282 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
g30rv17ys/avhubert
|
g30rv17ys
| 2022-01-20T13:07:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://dl.fbaipublicfiles.com/avhubert/model/lrs3_vox/vsr/base_vox_433h.pt
|
mptrigo/run1
|
mptrigo
| 2022-01-20T10:37:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: run1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 8.4217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-es](https://huggingface.co/Helsinki-NLP/opus-mt-es-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1740
- Bleu: 8.4217
- Gen Len: 15.9457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 250 | 4.2342 | 0.8889 | 83.4022 |
| 4.6818 | 2.0 | 500 | 3.7009 | 4.1671 | 35.587 |
| 4.6818 | 3.0 | 750 | 3.4737 | 7.6414 | 23.9674 |
| 3.4911 | 4.0 | 1000 | 3.3713 | 7.7512 | 18.6957 |
| 3.4911 | 5.0 | 1250 | 3.2689 | 8.0901 | 19.4674 |
| 3.0164 | 6.0 | 1500 | 3.2194 | 8.5708 | 25.0543 |
| 3.0164 | 7.0 | 1750 | 3.1853 | 9.5275 | 23.9239 |
| 2.6954 | 8.0 | 2000 | 3.1562 | 8.5635 | 18.9674 |
| 2.6954 | 9.0 | 2250 | 3.1564 | 8.2031 | 17.5978 |
| 2.4503 | 10.0 | 2500 | 3.1314 | 8.5638 | 18.1522 |
| 2.4503 | 11.0 | 2750 | 3.1511 | 8.8428 | 17.913 |
| 2.2554 | 12.0 | 3000 | 3.1513 | 8.1244 | 17.0 |
| 2.2554 | 13.0 | 3250 | 3.1664 | 8.0157 | 16.2717 |
| 2.1202 | 14.0 | 3500 | 3.1656 | 8.7758 | 16.6087 |
| 2.1202 | 15.0 | 3750 | 3.1550 | 8.4637 | 16.4565 |
| 2.0082 | 16.0 | 4000 | 3.1702 | 8.2488 | 15.8587 |
| 2.0082 | 17.0 | 4250 | 3.1725 | 8.609 | 16.3043 |
| 1.9274 | 18.0 | 4500 | 3.1750 | 8.4476 | 15.8043 |
| 1.9274 | 19.0 | 4750 | 3.1734 | 8.4753 | 16.5543 |
| 1.888 | 20.0 | 5000 | 3.1740 | 8.4217 | 15.9457 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
hrdipto/wav2vec2-xls-r-tf-left-right-shuru
|
hrdipto
| 2022-01-20T08:48:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-tf-left-right-shuru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-tf-left-right-shuru
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
- Wer: 1.2628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5528 | 23.81 | 500 | 0.5509 | 1.9487 |
| 0.2926 | 47.62 | 1000 | 0.1306 | 1.2756 |
| 0.1171 | 71.43 | 1500 | 0.1189 | 1.2628 |
| 0.0681 | 95.24 | 2000 | 0.0921 | 1.2628 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ml6team/distilbert-base-dutch-cased-toxic-comments
|
ml6team
| 2022-01-20T08:21:12Z | 10 | 6 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- nl
tags:
- text-classification
- pytorch
widget:
- text: "Ik heb je lief met heel mijn hart"
example_title: "Non toxic comment 1"
- text: "Dat is een goed punt, zo had ik het nog niet bekeken."
example_title: "Non toxic comment 2"
- text: "Wat de fuck zei je net tegen me, klootzak?"
example_title: "Toxic comment 1"
- text: "Rot op, vuile hoerenzoon."
example_title: "Toxic comment 2"
license: apache-2.0
metrics:
- Accuracy, F1 Score, Recall, Precision
---
# distilbert-base-dutch-toxic-comments
## Model description:
This model was created with the purpose to detect toxic or potentially harmful comments.
For this model, we finetuned a multilingual distilbert model [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the translated [Jigsaw Toxicity dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The original dataset was translated using the appropriate [MarianMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
The model was trained for 2 epochs, on 90% of the dataset, with the following arguments:
```
training_args = TrainingArguments(
    output_dir="distilbert-dutch-toxic",  # placeholder output directory (not given in the original card)
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    load_best_model_at_end=True,
    metric_for_best_model="recall",
    num_train_epochs=2,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_total_limit=10,
    logging_steps=100,
    eval_steps=250,
    save_steps=250,
    weight_decay=0.001,
    report_to="wandb")
```
## Model Performance:
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| --- | --- | --- | --- |
| 95.75 | 78.88 | 77.23 | 80.61 |
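A minimal inference sketch (not part of the original card; the example sentence is taken from the widget above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ml6team/distilbert-base-dutch-cased-toxic-comments")
print(classifier("Wat de fuck zei je net tegen me, klootzak?"))
```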
## Dataset:
Unfortunately we cannot open-source the dataset, since we are bound by the underlying Jigsaw license.
|
ml6team/robbert-dutch-base-toxic-comments
|
ml6team
| 2022-01-20T07:57:36Z | 2,793 | 6 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- nl
tags:
- text-classification
- pytorch
widget:
- text: "Ik heb je lief met heel mijn hart"
example_title: "Non toxic comment 1"
- text: "Dat is een goed punt, zo had ik het nog niet bekeken."
example_title: "Non toxic comment 2"
- text: "Wat de fuck zei je net tegen me, klootzak?"
example_title: "Toxic comment 1"
- text: "Rot op, vuile hoerenzoon."
example_title: "Toxic comment 2"
license: apache-2.0
metrics:
- Accuracy, F1 Score, Recall, Precision
---
# RobBERT-dutch-base-toxic-comments
## Model description:
This model was created with the purpose to detect toxic or potentially harmful comments.
For this model, we finetuned a Dutch RoBERTa-based model called [RobBERT](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the translated [Jigsaw Toxicity dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The original dataset was translated using the appropriate [MarianMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
The model was trained for 2 epochs, on 90% of the dataset, with the following arguments:
```
training_args = TrainingArguments(
    output_dir="robbert-dutch-toxic",  # placeholder output directory (not given in the original card)
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=6,
    load_best_model_at_end=True,
    metric_for_best_model="recall",
    num_train_epochs=2,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_total_limit=10,
    logging_steps=100,
    eval_steps=250,
    save_steps=250,
    weight_decay=0.001,
    report_to="wandb")
```
## Model Performance:
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| --- | --- | --- | --- |
| 95.63 | 78.80 | 78.99 | 78.61 |
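For comparison, the checkpoint can also be used without the pipeline helper (a hedged sketch, not part of the original card; the label mapping lives in `model.config.id2label` and the example sentence comes from the widget above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ml6team/robbert-dutch-base-toxic-comments"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Rot op, vuile hoerenzoon.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```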
## Dataset:
Unfortunately we cannot open-source the dataset, since we are bound by the underlying Jigsaw license.
|
rdpatilds/distilbert-finetuned-imdb
|
rdpatilds
| 2022-01-20T05:49:25Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: rdpatilds/distilbert-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rdpatilds/distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6914
- Validation Loss: 2.5383
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6914 | 2.5383 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
abdelkader/distilbert-base-uncased-distilled-clinc
|
abdelkader
| 2022-01-20T05:15:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9464516129032258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.8460 | 0.7506 |
| 3.322 | 2.0 | 636 | 1.4301 | 0.8532 |
| 3.322 | 3.0 | 954 | 0.7377 | 0.9152 |
| 1.2296 | 4.0 | 1272 | 0.4784 | 0.9316 |
| 0.449 | 5.0 | 1590 | 0.3730 | 0.9390 |
| 0.449 | 6.0 | 1908 | 0.3367 | 0.9429 |
| 0.2424 | 7.0 | 2226 | 0.3163 | 0.9468 |
| 0.1741 | 8.0 | 2544 | 0.3074 | 0.9452 |
| 0.1741 | 9.0 | 2862 | 0.3054 | 0.9458 |
| 0.1501 | 10.0 | 3180 | 0.3038 | 0.9465 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
abdelkader/distilbert-base-uncased-finetuned-clinc
|
abdelkader
| 2022-01-20T04:59:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8739 | 0.8335 |
| 3.785 | 3.0 | 954 | 1.1525 | 0.8926 |
| 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 |
| 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mrp/marian-finetuned-kde4-en-to-fr
|
mrp
| 2022-01-20T04:05:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 50.20410659441166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Bleu: 50.2041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ethzanalytics/ai-msgbot-gpt2-XL
|
ethzanalytics
| 2022-01-20T01:40:42Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "Do you like my new haircut?\nperson beta:\n\n"
example_title: "haircut"
- text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n"
example_title: "teaching"
- text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n"
example_title: "favorite"
- text: "how much does it cost?\nperson beta:\n\n"
example_title: "money"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
---
# ai-msgbot GPT2-XL
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default Inference API examples should work _okay_
- an ideal test explicitly appends `person beta:` to the prompt text, so the model is forced to respond rather than simply continuing the entered prompt.
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?person beta:
yes, i like fried beans.
person alpha:
i wonder when the first beans were cultivated and how they were processed.
person beta:
nitrogenic bacteria (in
```
_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_
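A generation sketch along those lines (not part of the original card); the sampling values mirror the inference parameters listed in the card's metadata:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ethzanalytics/ai-msgbot-gpt2-XL")

# the prompt ends with the responder's name so the model answers as "person beta"
prompt = "do you like to eat beans?\nperson beta:\n\n"
result = generator(prompt, max_length=64, do_sample=True, top_p=0.85, top_k=10,
                   no_repeat_ngram_size=3, repetition_penalty=2.1)
print(result[0]["generated_text"])
```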
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
UBC-NLP/ARBERT
|
UBC-NLP
| 2022-01-19T20:10:55Z | 540 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"Arabic BERT",
"MSA",
"Twitter",
"Masked Langauge Model",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- ar
tags:
- Arabic BERT
- MSA
- Twitter
- Masked Langauge Model
widget:
- text: "اللغة العربية هي لغة [MASK]."
---
<img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/>
**ARBERT** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://mageed.arts.ubc.ca/files/2020/12/marbert_arxiv_2020.pdf)**. ARBERT is a large-scale pre-trained masked language model focused on Modern Standard Arabic (MSA). To train ARBERT, we use the same architecture as BERT-base: 12 attention layers, each with 12 attention heads and 768 hidden dimensions, and a vocabulary of 100K WordPieces, giving ∼163M parameters. We train ARBERT on a collection of Arabic datasets comprising **61GB of text** (**6.2B tokens**). For more information, please visit our GitHub [repo](https://github.com/UBC-NLP/marbert).
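For a quick check of the masked-language-model head, the checkpoint can be used with the `fill-mask` pipeline (a usage sketch, not part of the original card; the example sentence is the widget text above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UBC-NLP/ARBERT")
for prediction in fill_mask("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```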
# BibTex
If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
hrdipto/wav2vec2-xls-r-tf-left-right-trainer
|
hrdipto
| 2022-01-19T20:06:38Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-tf-left-right-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-tf-left-right-trainer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0090
- eval_wer: 0.0037
- eval_runtime: 11.2686
- eval_samples_per_second: 71.703
- eval_steps_per_second: 8.963
- epoch: 21.05
- step: 4000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt
|
vuiseng9
| 2022-01-19T19:13:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-pruneofa-90pc-bt```](https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. Magnitude sparsification at 0% upon initialization. Custom reverse masking and sparsity freezing are applied.
2. NNCF Quantization-Aware Training - symmetric 8-bit for both weights and activations on all learnable layers.
3. Custom distillation with the large model ```bert-large-uncased-whole-word-masking-finetuned-squad```.
```
eval_exact_match = 80.6623
eval_f1 = 87.7147
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 5647610d5ee2bf9f1324604e6579bca1c391e260
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 5dd7402e9a316041dea4ff67508c01047323616e
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
wget https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt/raw/main/nncf_bert_squad_sparsity.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-pruneofa-90pc-bt-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--pruneofa_qat \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--dataset_name squad \
--qat_checkpoint $MODELROOT/checkpoint-22000 \
--nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \
--to_onnx $OUTDIR/bert-base-squadv1-pruneofa-90pc-bt-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
kjackson/distilbert-base-uncased-finetuned-emotion
|
kjackson
| 2022-01-19T19:10:27Z | 0 | 0 | null |
[
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa base model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
|
dehio/german-qg-t5-drink600
|
dehio
| 2022-01-19T16:38:22Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"de",
"dataset:deepset/germanquad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
widget:
- text: "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
language:
- de
tags:
- question generation
datasets:
- deepset/germanquad
model-index:
- name: german-qg-t5-drink600
results: []
---
# german-qg-t5-drink600
This model is fine-tuned for question generation in German. The expected answer must be highlighted with a `<hl>` token. It is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad) and further pre-trained on drink-related questions.
## Task example
#### Input
generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung,
die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.
#### Expected Question
Zu welchen Gelegenheiten passt der Monk Sour gut?
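A minimal inference sketch with the standard `transformers` text2text-generation pipeline (the generation settings here, such as `max_length`, are illustrative assumptions, not tuned values):
```python
from transformers import pipeline

# Question generation is framed as text-to-text generation;
# the expected answer span is wrapped in <hl> tokens.
qg = pipeline("text2text-generation", model="dehio/german-qg-t5-drink600")

text = (
    "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, "
    "die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
)
print(qg(text, max_length=64)[0]["generated_text"])
```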
## Model description
The model is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad), which was pre-trained on [GermanQUAD](https://www.deepset.ai/germanquad). We further pre-trained it on questions annotated on drink recipes from [Mixology](https://mixology.eu/) ("drink600").
We have not yet open sourced the dataset, since we do not own copyright on the source material.
## Training and evaluation data
The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).
## Evaluation
It achieves a **BLEU-4 score of 29.80** on the drink600 test set (n=120) and **11.30** on the GermanQUAD test set.
Thus, fine-tuning on drink600 did not affect performance on GermanQuAD.
In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of **10.76** on the drink600 test set.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
indonesian-nlp/wav2vec2-luganda
|
indonesian-nlp
| 2022-01-19T16:19:45Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"lg",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: lg
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: Wav2Vec2 Luganda by Indonesian-NLP
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lg
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 7.53
---
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
WER without KenLM: 15.38 %
WER with KenLM:
**Test Result**: 7.53 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
|
DanL/scientific-challenges-and-directions
|
DanL
| 2022-01-19T12:47:22Z | 315 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:DanL/scientific-challenges-and-directions-dataset",
"arxiv:2108.13751",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
- text-classification
language:
- en
datasets:
- DanL/scientific-challenges-and-directions-dataset
widget:
- text: "severe atypical cases of pneumonia emerged and quickly spread worldwide."
example_title: "challenge"
- text: "we speculate that studying IL-6 will be beneficial."
example_title: "direction"
- text: "in future studies, both PRRs should be tested as the cause for multiple deaths."
example_title: "both"
- text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots."
example_title: "neither"
metrics:
- precision
- recall
- f1
model-index:
- name: scientific-challenges-and-directions
results: []
---
# scientific-challenges-and-directions
We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows:
* **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap.
* **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration.
* The model in this repo is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we've upgraded the infrastructure since the paper was released - there are slight differences in the results).
* Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset).
* Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation).
* Feel free to [email us](#contact-us).
* Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application.
## Model description
This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification.
## Training and evaluation data
The scientific-challenges-and-directions model is trained based on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test split of the data, see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751).
## Example notebook
We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`.
A training notebook is also included.
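If you prefer a quick inline check over the notebook, the sketch below shows one way to query the classifier with plain `transformers`. It assumes the checkpoint exposes two sigmoid-scored labels (challenge, direction); `model.config.id2label` may contain generic names such as `LABEL_0`/`LABEL_1`, in which case the mapping should be confirmed against the notebook.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "DanL/scientific-challenges-and-directions"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "we speculate that studying IL-6 will be beneficial."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: apply a sigmoid per label rather than a softmax over labels.
probs = torch.sigmoid(logits).squeeze()
for label_id, label_name in model.config.id2label.items():
    print(label_name, round(probs[label_id].item(), 3))
```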
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning rate: 2e-05
- train batch size: 8
- eval batch size: 4
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr scheduler type: linear
- lr scheduler warmup steps: 500
- num epochs: 30
### Training results
The model achieves the following results on the test set:
- Precision Challenge: 0.768719
- Recall Challenge: 0.780405
- F1 Challenge: 0.774518
- Precision Direction: 0.758112
- Recall Direction: 0.774096
- F1 Direction: 0.766021
- Precision (micro avg. on both labels): 0.764894
- Recall (micro avg. on both labels): 0.778139
- F1 (micro avg. on both labels): 0.771459
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
## Citation
If using our dataset and models, please cite:
```
@misc{lahav2021search,
title={A Search Engine for Discovery of Scientific Challenges and Directions},
author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope},
year={2021},
eprint={2108.13751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
Please don't hesitate to reach out.
**Email:** `lahav@mail.tau.ac.il`,`tomh@allenai.org`.
|
baaastien/xls-r-ab-test
|
baaastien
| 2022-01-19T12:03:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
mishig/test_vid
|
mishig
| 2022-01-19T09:56:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Video demo on ModelCard
Please find [this file](https://huggingface.co/mishig/test_vid/blob/main/README.md) to see how to add a video to model card.
<video src="https://huggingface.co/mishig/test_vid/resolve/main/output.mp4" controls autoplay loop/>
|
chitra/finetuned-adversarial-paraphrase-model
|
chitra
| 2022-01-19T09:13:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: finetuned-adversarial-paraphrase-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-adversarial-paraphrase-model
This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0848 | 1.0 | 2000 | 5.4633 |
| 0.0495 | 2.0 | 4000 | 6.0352 |
| 0.0121 | 3.0 | 6000 | 7.5680 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mrp/distilbert-base-uncased-finetuned-imdb
|
mrp
| 2022-01-19T08:44:09Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/histronicmonstr
|
huggingtweets
| 2022-01-19T04:57:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/histronicmonstr/1642568219493/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1431060400171270149/X2agCkD0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">(心) !!!Ma-tin Korii!!! Uwa~😃!!!</div>
<div style="text-align: center; font-size: 14px;">@histronicmonstr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from (心) !!!Ma-tin Korii!!! Uwa~😃!!!.
| Data | (心) !!!Ma-tin Korii!!! Uwa~😃!!! |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 97 |
| Short tweets | 488 |
| Tweets kept | 2618 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sdp3pm6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @histronicmonstr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ms6e48p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ms6e48p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/histronicmonstr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/godslovepariah
|
huggingtweets
| 2022-01-19T04:12:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/godslovepariah/1642565537762/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1432780406777020417/XTrp9MCR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LOVER//PARIAH</div>
<div style="text-align: center; font-size: 14px;">@godslovepariah</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LOVER//PARIAH.
| Data | LOVER//PARIAH |
| --- | --- |
| Tweets downloaded | 525 |
| Retweets | 9 |
| Short tweets | 10 |
| Tweets kept | 506 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6l5fj9xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @godslovepariah's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/godslovepariah')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
NbAiLab/roberta_des_128
|
NbAiLab
| 2022-01-19T01:06:51Z | 3 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
Just for performing some experiments. Do not use.
This needed to be restarted at 100k. I am getting memory errors at the end of the epoch. Not really sure why.
Step 2 is therefore on train_2__4, with a static learning rate for a while. The first 100k ended at 0.59, which is decent this early. No point in running more epochs here though. Changing the corpus and continuing training.
|
domdomreloaded/bert-base-uncased-finetuned-swag
|
domdomreloaded
| 2022-01-18T22:33:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6045
- Accuracy: 0.7960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 |
| 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
milyiyo/electra-base-gen-finetuned-amazon-review
|
milyiyo
| 2022-01-18T21:21:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-base-gen-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5024
- name: F1
type: f1
value: 0.5063190059782597
- name: Precision
type: precision
value: 0.5121183330982292
- name: Recall
type: recall
value: 0.5024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-gen-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8030
- Accuracy: 0.5024
- F1: 0.5063
- Precision: 0.5121
- Recall: 0.5024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.5135 | 1.0 | 1000 | 0.4886 | 0.4929 | 1.6580 | 0.5077 | 0.4886 |
| 0.4138 | 2.0 | 2000 | 0.5044 | 0.5093 | 1.7951 | 0.5183 | 0.5044 |
| 0.4244 | 3.0 | 3000 | 0.5022 | 0.5068 | 1.8108 | 0.5141 | 0.5022 |
| 0.4231        | 6.0   | 6000 | 0.4972   | 0.5018 | 1.7636          | 0.5092    | 0.4972 |
| 0.3574        | 7.0   | 7000 | 0.5024   | 0.5063 | 1.8030          | 0.5121    | 0.5024 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
malloc/OpenNMT-py-English-German-Transformer
|
malloc
| 2022-01-18T20:18:11Z | 0 | 2 | null |
[
"translation",
"pytorch",
"de",
"en",
"dataset:WMT",
"license:mit",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- de
- en
tags:
- translation
- pytorch
license: mit
datasets:
- WMT
metrics:
- bleu
---
# OpenNMT-py-English-German-Transformer
[OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework.
OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for English to German translation.
- Configuration: Base Transformer configuration with [standard training options](http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model-do-you-support-multi-gpu)
- Data: WMT with shared SentencePiece model
- BLEU:
- newstest2014 = 26.89
- newstest2017 = 28.09
|
vuiseng9/bert-base-squadv1-pruneofa-90pc-bt
|
vuiseng9
| 2022-01-18T19:13:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
This model is the result of transfer learning from [bert-base pruneofa 90% sparse](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) on the SQuADv1 dataset.
```
eval_exact_match = 80.2933
eval_f1 = 87.6788
eval_samples = 10784
```
# Train
Use https://github.com/IntelLabs/Model-Compression-Research-Package.git and see ```pruneofa-transfer-learning.sh``` for the training recipe.
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
phueb/BabyBERTa-2
|
phueb
| 2022-01-18T14:44:44Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"BabyBERTa",
"en",
"dataset:CHILDES",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- BabyBERTa
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTa
### Overview
BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
### Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, to load the tokenizer for BabyBERTa-1, load it as follows:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
add_prefix_space=True)
```
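For this checkpoint (BabyBERTa-2), a minimal fill-mask sketch that keeps `add_prefix_space=True` might look as follows; the prompt is taken from the widget examples above.
```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast, pipeline

# Load tokenizer and model for BabyBERTa-2; add_prefix_space=True is required (see above).
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-2", add_prefix_space=True)
model = RobertaForMaskedLM.from_pretrained("phueb/BabyBERTa-2")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Do you like your <mask> ?")[0]["token_str"])
```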
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
### Performance
BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
phueb/BabyBERTa-3
|
phueb
| 2022-01-18T14:41:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"BabyBERTa",
"en",
"dataset:CHILDES",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- BabyBERTa
license: mit
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTa
### Overview
BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
### Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, to load the tokenizer for BabyBERTa-1, load it as follows:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
add_prefix_space=True)
```
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
### Performance
BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
soskok1288/Sas
|
soskok1288
| 2022-01-18T11:54:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
export enum PipelineType {
"text-generation"}
|
hkunlp/T5_large_prefix_all_tasks_2upsample2
|
hkunlp
| 2022-01-18T07:15:22Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This is the checkpoint of the prefix-tuning model we trained on 21 tasks using an upsampling temperature of 2.
Note: the prefix module is large because we keep the re-parameterization weights and did not compress them, so that the checkpoint stays closer to the original and remains extendable for researchers.
|
philschmid/tf-distilbart-cnn-12-6-tradetheevent
|
philschmid
| 2022-01-18T05:02:13Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: philschmid/tf-distilbart-cnn-12-6-tradetheevent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# philschmid/tf-distilbart-cnn-12-6-tradetheevent
This model is a fine-tuned version of [philschmid/tf-distilbart-cnn-12-6](https://huggingface.co/philschmid/tf-distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6894
- Validation Loss: 1.7245
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 161440, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.6635 | 1.5957 | 0 |
| 1.3144 | 1.5577 | 1 |
| 1.0819 | 1.6059 | 2 |
| 0.8702 | 1.6695 | 3 |
| 0.6894 | 1.7245 | 4 |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
dmiller1/distilbert-base-uncased-finetuned-emotion
|
dmiller1
| 2022-01-18T03:59:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261144741040841
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 |
| 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.7.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/dankogai-hirox246-syakkin_dama
|
huggingtweets
| 2022-01-18T02:01:17Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dankogai-hirox246-syakkin_dama/1642471272927/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190142566831984640/o4kO2hp-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1283621672541536259/WI_8OTJz_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & Dan Kogai & 借金玉</div>
<div style="text-align: center; font-size: 14px;">@dankogai-hirox246-syakkin_dama</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & Dan Kogai & 借金玉.
| Data | ひろゆき, Hiroyuki Nishimura | Dan Kogai | 借金玉 |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3250 | 3249 |
| Retweets | 283 | 341 | 260 |
| Short tweets | 1819 | 2313 | 2918 |
| Tweets kept | 1147 | 596 | 71 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1meoqt2b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dankogai-hirox246-syakkin_dama's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gc1ic0l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gc1ic0l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dankogai-hirox246-syakkin_dama')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jkang/drawing-artistic-trend-classifier
|
jkang
| 2022-01-18T01:19:29Z | 3 | 0 |
tf-keras
|
[
"tf-keras",
"en",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
license: mit
datasets:
- web crawled (coming soon)
---
# Simple CNN-based Artist Classifier
This repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends.
See also: `https://huggingface.co/jkang/drawing-artist-classifier`
- The purpose of this model was for a quick prototyping
- Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler`
- 8 popular artistic trends were chosen:
- \[TREND\]: \[ID\]
- cubism: 0,
- expressionism: 1,
- fauvisme: 2,
- graffitiar: 3,
- impressionism: 4,
- popart: 5,
- post_impressionism: 6,
- surrealism: 7
- About 100 representative paintings per artist, covering the 8 trends, were crawled and manually checked
- Dataset will be shared later
# How to use
```python
import tensorflow as tf
from huggingface_hub import from_pretrained_keras
model = from_pretrained_keras("jkang/drawing-artistic-trend-classifier")
image_file = 'monet.jpg'
img = tf.io.read_file(image_file)
img = tf.io.decode_jpeg(img, channels=3)
last_layer_activation, predictions = model(img[tf.newaxis,...])
```
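Continuing from the snippet above, a hedged follow-up sketch that maps the output to a trend name. It assumes `predictions` holds one score per class in the id order listed earlier, and that the input image is already in the size/format the model expects.
```python
import numpy as np

# Assumed id order, copied from the trend/id mapping above.
trend_names = ['cubism', 'expressionism', 'fauvisme', 'graffitiar',
               'impressionism', 'popart', 'post_impressionism', 'surrealism']

pred_id = int(np.argmax(np.asarray(predictions), axis=-1)[0])
print(f"Predicted trend: {trend_names[pred_id]}")
```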
# Intended uses & limitations
You can use this model freely for predicting artists or trends of a given image.
Please keep in mind that this model is not intended for production, but for research and quick prototyping.
Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.
---
- 2022-01-18 first created by jaekoo kang
|
huggingtweets/ayatokura-chomado-ikeay
|
huggingtweets
| 2022-01-17T23:42:42Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ayatokura-chomado-ikeay/1642462957980/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1334136134234849280/XgE0O39a_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480842681182220288/ywam5sXK_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480168235417083905/Kp8uyXIy_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">池澤あやか / いけあや & ちょまど🎀💻エンジニア兼漫画家 & 職業「戸倉彩」👩💻とくあや</div>
<div style="text-align: center; font-size: 14px;">@ayatokura-chomado-ikeay</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 池澤あやか / いけあや & ちょまど🎀💻エンジニア兼漫画家 & 職業「戸倉彩」👩💻とくあや.
| Data | 池澤あやか / いけあや | ちょまど🎀💻エンジニア兼漫画家 | 職業「戸倉彩」👩💻とくあや |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3245 | 3249 |
| Retweets | 224 | 717 | 1266 |
| Short tweets | 2813 | 867 | 1036 |
| Tweets kept | 213 | 1661 | 947 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rhguk5h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayatokura-chomado-ikeay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34bxjwb8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34bxjwb8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ayatokura-chomado-ikeay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Rocketknight1/marian-finetuned-kde4-en-to-fr
|
Rocketknight1
| 2022-01-17T20:42:34Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6862
- Validation Loss: 0.8050
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
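For reference, the optimizer configuration above roughly corresponds to the following `transformers.create_optimizer` call (a sketch reconstructed from the config dict, not the original training script):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) polynomial decay from 5e-5 to 0.0
# over 17733 steps and a weight decay rate of 0.01, matching the config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-05,
    num_train_steps=17733,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```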
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0615 | 0.8832 | 0 |
| 0.7983 | 0.8211 | 1 |
| 0.6862 | 0.8050 | 2 |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ronanki/xlmr_17-01-2022_v3
|
ronanki
| 2022-01-17T20:34:20Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/xlmr_17-01-2022_v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/xlmr_17-01-2022_v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_17-01-2022_v3')
model = AutoModel.from_pretrained('ronanki/xlmr_17-01-2022_v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_17-01-2022_v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
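For illustration, these parameters would typically map onto a `model.fit` call like the sketch below (an assumption-laden reconstruction, not the original training script; the `xlm-roberta-base` starting checkpoint and the `train_examples` list are placeholders):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder starting checkpoint and training pairs (assumptions, not from the card)
model = SentenceTransformer('xlm-roberta-base')
train_examples = [InputExample(texts=["example query", "matching passage"])]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=4,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```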
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
dshvadskiy/bert-finetuned-ner
|
dshvadskiy
| 2022-01-17T17:54:13Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2002",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2002
type: conll2002
args: es
metrics:
- name: Precision
type: precision
value: 0.7394396551724138
- name: Recall
type: recall
value: 0.7883731617647058
- name: F1
type: f1
value: 0.7631227758007118
- name: Accuracy
type: accuracy
value: 0.9655744705631151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1458
- Precision: 0.7394
- Recall: 0.7884
- F1: 0.7631
- Accuracy: 0.9656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1047 | 1.0 | 1041 | 0.1516 | 0.7173 | 0.7505 | 0.7335 | 0.9602 |
| 0.068 | 2.0 | 2082 | 0.1280 | 0.7470 | 0.7888 | 0.7673 | 0.9664 |
| 0.0406 | 3.0 | 3123 | 0.1458 | 0.7394 | 0.7884 | 0.7631 | 0.9656 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
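Not part of the original card: a minimal inference sketch with the `transformers` pipeline, using a Spanish sentence since the model was fine-tuned on the Spanish split of conll2002:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dshvadskiy/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Lionel Messi juega en el Paris Saint-Germain."))
```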
|
abhi1nandy2/EManuals_BERT
|
abhi1nandy2
| 2022-01-17T17:12:46Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"EManuals",
"customer support",
"QA",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- English
tags:
- EManuals
- customer support
- QA
- bert
---
Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website
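A minimal fill-mask sketch (not from the original card; it assumes the checkpoint ships a standard BERT tokenizer and masked-LM head):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="abhi1nandy2/EManuals_BERT")
# Ask the E-manual-pretrained model to complete a device-related sentence
print(fill_mask("Press the power [MASK] to turn on the device."))
```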
## Citation
Please cite the work if you would like to use it.
```
@inproceedings{nandy-etal-2021-question-answering,
title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework",
author = "Nandy, Abhilash and
Sharma, Soumya and
Maddhashiya, Shubham and
Sachdeva, Kapil and
Goyal, Pawan and
Ganguly, Niloy",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.392",
doi = "10.18653/v1/2021.findings-emnlp.392",
pages = "4600--4609",
abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.",
}
```
|
nielsr/tapex-large-finetuned-tabfact
|
nielsr
| 2022-01-17T13:39:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-classification",
"tapex",
"en",
"dataset:tab_fact",
"arxiv:2107.07653",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapex
license: apache-2.0
datasets:
- tab_fact
inference: false
---
TAPEX-large model fine-tuned on TabFact. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
To load it and run inference, you can do the following:
```
from transformers import BartTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("nielsr/tapex-large-finetuned-tabfact")
# create table
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
# turn into dict
table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]}
# turn into format TAPEX expects
# define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
# add sentence
sentence = "George Clooney has 69 movies"
joint_input = sentence + " " + linear_table
# encode
encoding = tokenizer(joint_input, return_tensors="pt")
# forward pass
outputs = model(**encoding)
# print prediction
logits = outputs.logits
print(logits.argmax(-1))
```
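Note that `IndexedRowTableLinearize` is not part of `transformers`; it has to be copied from the linked file in the Table-Pretraining repository. As a rough illustration only, a stand-in that approximates the expected flat `col : ... row 1 : ...` format might look like this (an approximation, not the exact implementation):
```python
class IndexedRowTableLinearize:
    """Approximate sketch of TAPEX's flat table representation."""

    def process_table(self, table_dict):
        # header row, e.g. "col : Actors | Number of movies"
        parts = ["col : " + " | ".join(table_dict["header"])]
        # data rows, e.g. "row 1 : Brad Pitt | 87"
        for idx, row in enumerate(table_dict["rows"], start=1):
            parts.append(f"row {idx} : " + " | ".join(str(cell) for cell in row))
        return " ".join(parts)
```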
|
Dumiiii/wav2vec2-xls-r-300m-romanian
|
Dumiiii
| 2022-01-17T13:34:59Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: wav2vec2-xls-r-300m-romanian
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## This model achieves a WER of 12.457178% on the Common Voice ro test split
# wav2vec2-xls-r-300m-romanian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice ro and RSS datasets.
It achieves the following results on the evaluation set:
- eval_loss: 0.0836
- eval_wer: 0.0705
- eval_runtime: 160.4549
- eval_samples_per_second: 11.081
- eval_steps_per_second: 1.39
- epoch: 14.38
- step: 2703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
Used the following code for evaluation:
```
import re
import string
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ro", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian")
model = Wav2Vec2ForCTC.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian")
model.to("cuda")
chars_to_ignore_regex = '['+string.punctuation+']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
Credits for evaluation: https://huggingface.co/anton-l
|
nielsr/tapex-large-finetuned-wtq
|
nielsr
| 2022-01-17T09:56:43Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wtq",
"arxiv:2107.07653",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapex
- table-question-answering
license: apache-2.0
datasets:
- wtq
inference: false
---
TAPEX-large model fine-tuned on WTQ. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
To load it and run inference, you can do the following:
```
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-wtq")
# create table
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
# turn into dict
table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]}
# turn into format TAPEX expects
# define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
# add question
question = "how many movies does George Clooney have?"
joint_input = question + " " + linear_table
# encode
encoding = tokenizer(joint_input, return_tensors="pt")
# forward pass
outputs = model.generate(**encoding)
# decode
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
|
philschmid/tf-distilbart-cnn-12-6
|
philschmid
| 2022-01-17T08:39:52Z | 28 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
- xsum
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---
# This is a TensorFlow fork of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6)
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
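A short usage sketch with the TensorFlow classes (not from the original card; it assumes the `tf` weights load into `TFBartForConditionalGeneration`):
```python
from transformers import AutoTokenizer, TFBartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("philschmid/tf-distilbart-cnn-12-6")
model = TFBartForConditionalGeneration.from_pretrained("philschmid/tf-distilbart-cnn-12-6")

article = "A long news article to be summarized goes here..."
inputs = tokenizer(article, return_tensors="tf", truncation=True)
summary_ids = model.generate(inputs["input_ids"], max_length=60, num_beams=4)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```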
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
|
sahri/indonesiasentiment
|
sahri
| 2022-01-17T04:50:03Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"indonesian-roberta-base-sentiment-classifier",
"id",
"dataset:indonlu",
"arxiv:1907.11692",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: id
tags:
- indonesian-roberta-base-sentiment-classifier
license: mit
datasets:
- indonlu
widget:
- text: "tidak jelek tapi keren"
---
## Indonesian RoBERTa Base Sentiment Classifier
Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews.
After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` |
## Evaluation Results
The model was trained for 5 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 |
| 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 |
| 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 |
| 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 |
| 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "sahri/indonesiasentiment"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("tidak jelek tapi keren")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by Sahri Ramadhan. All computation and development were done on Google Colaboratory using its free GPU access.
|
ilevs/opus-mt-ru-en-finetuned-ru-to-en
|
ilevs
| 2022-01-16T19:29:07Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ru-en-finetuned-ru-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned-ru-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1251
- Bleu: 15.9892
- Gen Len: 5.0168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.6914 | 1.0 | 4956 | 2.5116 | 11.1411 | 4.9989 |
| 2.2161 | 2.0 | 9912 | 2.3255 | 11.7334 | 5.1678 |
| 1.9237 | 3.0 | 14868 | 2.2388 | 13.6802 | 5.1463 |
| 1.7087 | 4.0 | 19824 | 2.1892 | 13.8815 | 5.0625 |
| 1.5423 | 5.0 | 24780 | 2.1586 | 14.8182 | 5.0779 |
| 1.3909 | 6.0 | 29736 | 2.1445 | 14.3603 | 5.2194 |
| 1.3041 | 7.0 | 34692 | 2.1323 | 16.2138 | 5.0438 |
| 1.2078 | 8.0 | 39648 | 2.1275 | 16.2574 | 5.0165 |
| 1.1523 | 9.0 | 44604 | 2.1255 | 16.0368 | 5.014 |
| 1.1005 | 10.0 | 49560 | 2.1251 | 15.9892 | 5.0168 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
husnu/electra-small-turkish-uncased-discriminator
|
husnu
| 2022-01-16T19:01:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: ft_electra-small-turkish-uncased-discriminator_lr-2e-1_epochs-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.951 | 1.0 | 5818 | 5.9506 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
|
Shushant
| 2022-01-16T15:54:15Z | 55 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.9518 |
| No log | 2.0 | 44 | 3.2703 |
| No log | 3.0 | 66 | 2.9308 |
| No log | 4.0 | 88 | 2.7806 |
| No log | 5.0 | 110 | 2.6926 |
| No log | 6.0 | 132 | 2.7043 |
| No log | 7.0 | 154 | 2.7113 |
| No log | 8.0 | 176 | 2.7236 |
| No log | 9.0 | 198 | 2.7559 |
| No log | 10.0 | 220 | 2.7515 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Shushant/biobert-v1.1-biomedicalQuestionAnswering
|
Shushant
| 2022-01-16T15:34:49Z | 83 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: biobert-v1.1-biomedicalQuestionAnswering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-biomedicalQuestionAnswering
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.7409 |
| No log | 2.0 | 44 | 3.1852 |
| No log | 3.0 | 66 | 3.0342 |
| No log | 4.0 | 88 | 2.9416 |
| No log | 5.0 | 110 | 2.9009 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
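A minimal question-answering sketch (not part of the original card; the example context is made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Shushant/biobert-v1.1-biomedicalQuestionAnswering")

context = (
    "Contamination of water sources with heavy metals such as lead and arsenic "
    "poses serious risks to human health."
)
print(qa(question="What contaminates water sources?", context=context))
```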
|
jiobiala24/wav2vec2-base-checkpoint-5
|
jiobiala24
| 2022-01-16T10:56:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-5
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-4](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-4) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9849
- Wer: 0.3354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3947 | 1.96 | 1000 | 0.5749 | 0.3597 |
| 0.2856 | 3.93 | 2000 | 0.6212 | 0.3479 |
| 0.221 | 5.89 | 3000 | 0.6280 | 0.3502 |
| 0.1755 | 7.86 | 4000 | 0.6517 | 0.3526 |
| 0.1452 | 9.82 | 5000 | 0.7115 | 0.3481 |
| 0.1256 | 11.79 | 6000 | 0.7687 | 0.3509 |
| 0.1117 | 13.75 | 7000 | 0.7785 | 0.3490 |
| 0.0983 | 15.72 | 8000 | 0.8115 | 0.3442 |
| 0.0877 | 17.68 | 9000 | 0.8290 | 0.3429 |
| 0.0799 | 19.65 | 10000 | 0.8517 | 0.3412 |
| 0.0733 | 21.61 | 11000 | 0.9370 | 0.3448 |
| 0.066 | 23.58 | 12000 | 0.9157 | 0.3410 |
| 0.0623 | 25.54 | 13000 | 0.9673 | 0.3377 |
| 0.0583 | 27.5 | 14000 | 0.9804 | 0.3348 |
| 0.0544 | 29.47 | 15000 | 0.9849 | 0.3354 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
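A minimal transcription sketch using the ASR pipeline (not part of the original card; `sample.wav` is a placeholder for any 16 kHz English recording, and decoding a local file requires ffmpeg):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiobiala24/wav2vec2-base-checkpoint-5")
print(asr("sample.wav"))  # expects 16 kHz audio, matching the Common Voice training data
```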
|
Sakil/imdbsentdistilbertmodel
|
Sakil
| 2022-01-16T06:54:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text Classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- text Classification
license: apache-2.0
widget:
- text: "I like you. </s></s> I love you."
---
* IMDBSentimentDistilBertModel:
- I used the IMDB movie review dataset to create a custom model with DistilBertForSequenceClassification.
```python
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

# Load the fine-tuned model from a local directory
model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
```
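A hedged inference sketch (it assumes the published Hub repo `Sakil/imdbsentdistilbertmodel` includes the tokenizer files; the label order is whatever the checkpoint's config defines):
```python
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

model_id = "Sakil/imdbsentdistilbertmodel"  # assumption: the Hub repo of this card
tokenizer = DistilBertTokenizerFast.from_pretrained(model_id)
model = DistilBertForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A wonderful, heartfelt movie.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the sentiment classes
```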
|
anzorq/t5-v1_1-small-ru_kbd-cased
|
anzorq
| 2022-01-16T05:24:51Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"ru",
"kbd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- ru
- kbd
tags:
- translation
datasets:
- anzorq/kbd-ru-1.67M-temp
- 17753 Russian-Kabardian pairs of text
widget:
- text: "ru->kbd: Я иду домой."
example_title: "Я иду домой."
- text: "ru->kbd: Дети играют во дворе."
example_title: "Дети играют во дворе."
- text: "ru->kbd: Сколько тебе лет?"
example_title: "Сколько тебе лет?"
---
## [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) model
### pretrained on [anzorq/kbd-ru-1.67M-temp](https://huggingface.co/datasets/anzorq/kbd-ru-1.67M-temp)
### fine-tuned on **17753** Russian-Kabardian word/sentence pairs
The kbd text uses a custom Latin script for optimization reasons.
Translation input should start with the prefix '**ru->kbd:** ' (see the sketch below).
**Tokenizer**: T5 sentencepiece, char, cased.
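A minimal translation sketch (not from the original card; it assumes the repo ships standard T5/seq2seq tokenizer and config files):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "anzorq/t5-v1_1-small-ru_kbd-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Every input must start with the "ru->kbd: " prefix
inputs = tokenizer("ru->kbd: Я иду домой.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```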
|
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
|
husnu
| 2022-01-15T18:42:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- turkish squad v2
model-index:
- name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
This model is a fine-tuned version of [husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3](https://huggingface.co/husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3) on the turkish squad2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6404 | 1.0 | 2245 | 1.4524 |
| 0.403 | 2.0 | 4490 | 1.5638 |
| 0.2355 | 3.0 | 6735 | 1.9011 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Fraser/to_delete
|
Fraser
| 2022-01-15T15:08:51Z | 0 | 0 | null |
[
"program-synthesis",
"en",
"dataset:program-synthesis",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
language:
- en
thumbnail: "https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png"
tags:
- program-synthesis
license: "mit"
datasets:
- program-synthesis
---
# Program Synthesis Data
Generated program synthesis datasets used to train [dreamcoder](https://github.com/ellisk42/ec).
Currently just supports text & list data.
```python
import datasets

_FEATURES = datasets.Features(
{
"description": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string"),
"types": datasets.Value("string")
}
)
```

|
jiobiala24/wav2vec2-base-checkpoint-4
|
jiobiala24
| 2022-01-15T12:59:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-4
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-3](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-3) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6
|
husnu
| 2022-01-15T07:27:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:tsquad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- tsquad
model-index:
- name: electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the turkish squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.128 | 1.0 | 722 | 2.7187 |
| 3.0376 | 2.0 | 1444 | 2.4486 |
| 2.5304 | 3.0 | 2166 | 2.3485 |
| 2.4214 | 4.0 | 2888 | 2.0450 |
| 2.1568 | 5.0 | 3610 | 2.0576 |
| 2.0752 | 6.0 | 4332 | 2.0379 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
khizon/distilbert-unreliable-news-eng-4L
|
khizon
| 2022-01-15T07:06:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there is no overlap in news sources between the three sets.
This model used the pre-trained weights of `distilbert-base-cased` as its starting point (only 4 layers) and achieves 84% accuracy on the test set. Its performance is within 1% of the BERT-based model while running at **2.0x** the speed.
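A minimal classification sketch (not part of the original card; the returned label names are whatever the checkpoint's config defines):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="khizon/distilbert-unreliable-news-eng-4L")
headline = "Scientists discover a miracle cure that doctors don't want you to know about."
print(clf(headline))
```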
For more details: [Github](https://github.com/khizon/CS284_final_project)
|
khizon/bert-unreliable-news-eng
|
khizon
| 2022-01-15T07:04:33Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there is no overlap in news sources between the three sets.
This model used the pre-trained weights of `bert-base-cased` as its starting point and achieves 84% accuracy on the test set.
For more details: [Github](https://github.com/khizon/CS284_final_project)
|
Abirate/gpt_3_finetuned_multi_x_science
|
Abirate
| 2022-01-15T06:16:57Z | 28 | 2 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
tags:
- Text Generation
- PyTorch
- Transformers
- gpt_neo
- text generation
---
## Pretrained Model Description: Open Source Version of GPT-3
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text.
It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI.
GPT-Neo (125M) is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.
It was first released in this [repository](https://github.com/EleutherAI/gpt-neo).
## Fine-tuned Model Description: GPT-3 fine-tuned Multi-XScience
The open-source version of GPT-3, GPT-Neo (125M), has been fine-tuned on a dataset called Multi-XScience: [Multi-XScience_Repository](https://github.com/yaolu/Multi-XScience), a large-scale dataset for extreme multi-document summarization of scientific articles.
I first fine-tuned the model and then deployed it with Google "Material Design" (on Anvil): [Abir Scientific text Generator](https://abir-scientific-text-generator.anvil.app/)
By fine-tuning GPT-Neo (the open-source version of GPT-3) on the Multi-XScience dataset, the model is now able to generate scientific texts (arguably even better than GPT-J (6B)).
Try putting the prompt "attention is all" on both my [Abir Scientific text Generator](https://abir-scientific-text-generator.anvil.app/) and on the [GPT-J Eleuther.ai Demo](https://6b.eleuther.ai/) to understand what I mean.
Here's a real-time demonstration video: [Video Demonstration](https://www.youtube.com/watch?v=XP8uZfnCYQI)
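A minimal generation sketch (not from the original card; it assumes the repo exposes standard GPT-Neo weights and tokenizer files):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Abirate/gpt_3_finetuned_multi_x_science")
print(generator("attention is all", max_length=60, do_sample=True, top_p=0.95))
```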
|
husnu/xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9
|
husnu
| 2022-01-15T05:08:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the Turkish squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5236 | 1.0 | 1050 | 3.0042 |
| 2.8489 | 2.0 | 2100 | 2.5866 |
| 2.5485 | 3.0 | 3150 | 2.3526 |
| 2.4067 | 4.0 | 4200 | 2.3535 |
| 2.3091 | 5.0 | 5250 | 2.2862 |
| 2.2401 | 6.0 | 6300 | 2.3989 |
| 2.1715 | 7.0 | 7350 | 2.2284 |
| 2.1414 | 8.0 | 8400 | 2.2298 |
| 2.1221 | 9.0 | 9450 | 2.2340 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
|
husnu
| 2022-01-14T20:57:15Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3828 | 1.0 | 1845 | 1.7946 |
| 1.5827 | 2.0 | 3690 | 1.4123 |
| 1.404 | 3.0 | 5535 | 1.3142 |
| 1.346 | 4.0 | 7380 | 1.2819 |
| 1.2871 | 5.0 | 9225 | 1.2630 |
| 1.2538 | 6.0 | 11070 | 1.2578 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vennify/t5-base-grammar-correction
|
vennify
| 2022-01-14T16:35:23Z | 13,051 | 165 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"grammar",
"en",
"dataset:jfleg",
"arxiv:1702.04066",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- grammar
- text2text-generation
license: cc-by-nc-sa-4.0
datasets:
- jfleg
---
# T5 Grammar Correction
This model generates a revised version of inputted text with the goal of containing fewer grammatical errors.
It was trained with [Happy Transformer](https://github.com/EricFillion/happy-transformer)
using a dataset called [JFLEG](https://arxiv.org/abs/1702.04066). Here's a [full article](https://www.vennify.ai/fine-tune-grammar-correction/) on how to train a similar model.
## Usage
`pip install happytransformer`
```python
from happytransformer import HappyTextToText, TTSettings
happy_tt = HappyTextToText("T5", "vennify/t5-base-grammar-correction")
args = TTSettings(num_beams=5, min_length=1)
# Add the prefix "grammar: " before each input
result = happy_tt.generate_text("grammar: This sentences has has bads grammar.", args=args)
print(result.text) # This sentence has bad grammar.
```
|
addy88/eli5-all-mpnet-base-v2
|
addy88
| 2022-01-14T13:24:40Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Finetune on [ELI5](https://huggingface.co/datasets/eli5)
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('addy88/eli5-all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('addy88/eli5-all-mpnet-base-v2')
model = AutoModel.from_pretrained('addy88/eli5-all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=addy88/eli5-all-mpnet-base-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 14393 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1439,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
lewtun/distilbert-base-uncased-finetuned-emotion-test-01
|
lewtun
| 2022-01-14T10:29:26Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-test-01
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.39
- name: F1
type: f1
value: 0.21884892086330932
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-test-01
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7510
- Accuracy: 0.39
- F1: 0.2188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 |
| No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|