| Column | Type | Range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-11 06:30:11 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-11 06:29:58 |
| card | string | length 11 to 1.01M |
younes9/AI-DAY-distilbert-base-uncased-finetuned-cola
younes9
2022-01-24T18:13:20Z
17
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: AI-DAY-distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5382139717003264 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AI-DAY-distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7236 - Matthews Correlation: 0.5382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5308 | 1.0 | 535 | 0.5065 | 0.4296 | | 0.3565 | 2.0 | 1070 | 0.5109 | 0.4940 | | 0.2399 | 3.0 | 1605 | 0.6056 | 0.5094 | | 0.1775 | 4.0 | 2140 | 0.7236 | 0.5382 | | 0.1242 | 5.0 | 2675 | 0.8659 | 0.5347 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
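The card above stops at the training summary and shows no inference code. Below is a minimal sketch using the `transformers` text-classification pipeline; the example sentences are illustrative, and the `LABEL_0`/`LABEL_1` output names are the library defaults since the card does not document an id-to-label mapping.

```python
from transformers import pipeline

# Load the fine-tuned CoLA checkpoint; labels are assumed to be the default
# LABEL_0 / LABEL_1 because the card does not document an id2label mapping.
classifier = pipeline(
    "text-classification",
    model="younes9/AI-DAY-distilbert-base-uncased-finetuned-cola",
)

# CoLA is a linguistic-acceptability task: the model scores whether a
# sentence is grammatically acceptable.
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by author the."))
```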
asanwari/agriculture-sentence-transformer
asanwari
2022-01-24T17:36:27Z
0
0
sentence-transformers
[ "sentence-transformers", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity language: english tags: - sentence-transformers - sentence-similarity - transformers --- # recobo/agri-sentence-transformer This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model was built using [recobo/agriculture-bert-uncased](https://huggingface.co/recobo/agriculture-bert-uncased), which is a BERT model trained on 6.5 million passages from the agricultural domain. Hence, this model is expected to perform well on sentence similarity tasks specifically for agricultural text data. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["A man is eating food.", "A man is eating a piece of bread"] model = SentenceTransformer('recobo/agri-sentence-transformer') embeddings = model.encode(sentences) print(embeddings)
anirudh21/bert-base-uncased-finetuned-cola
anirudh21
2022-01-24T16:29:06Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5796941781913538 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.9664 - Matthews Correlation: 0.5797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5017 | 1.0 | 535 | 0.5252 | 0.4841 | | 0.2903 | 2.0 | 1070 | 0.5550 | 0.4967 | | 0.1839 | 3.0 | 1605 | 0.7295 | 0.5634 | | 0.1132 | 4.0 | 2140 | 0.7762 | 0.5702 | | 0.08 | 5.0 | 2675 | 0.9664 | 0.5797 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
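As with the previous CoLA checkpoint, the card contains no usage snippet. A hedged sketch below loads the model directly and turns the logits into class probabilities; the example sentence is illustrative and the label order is assumed to be the default.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "anirudh21/bert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("They drank the pub dry.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities; the card does not name the labels, so the
# class indices are assumed to follow the default order.
probs = torch.softmax(logits, dim=-1)
print(probs)
```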
huggingtweets/yu_kisub21
huggingtweets
2022-01-24T15:24:45Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/yu_kisub21/1643037750346/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1476997379857723392/L6czpqmI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ゆう🇲🇾英語を軸に人生に革新を🔥</div> <div style="text-align: center; font-size: 14px;">@yu_kisub21</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ゆう🇲🇾英語を軸に人生に革新を🔥. | Data | ゆう🇲🇾英語を軸に人生に革新を🔥 | | --- | --- | | Tweets downloaded | 1580 | | Retweets | 366 | | Short tweets | 1137 | | Tweets kept | 77 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fswx6qh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yu_kisub21's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35tec8b2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35tec8b2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/yu_kisub21') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
dbsamu/electra-small-discriminator-finetuned-ner
dbsamu
2022-01-24T14:27:41Z
13
1
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: electra-small-discriminator-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: en metrics: - name: Precision type: precision value: 0.7330965535385425 - name: Recall type: recall value: 0.7542632861138681 - name: F1 type: f1 value: 0.7435293071244329 - name: Accuracy type: accuracy value: 0.8883011190233978 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-discriminator-finetuned-ner This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3685 - Precision: 0.7331 - Recall: 0.7543 - F1: 0.7435 - Accuracy: 0.8883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.5465 | 1.0 | 1250 | 0.4158 | 0.6932 | 0.7201 | 0.7064 | 0.8735 | | 0.4037 | 2.0 | 2500 | 0.3817 | 0.7191 | 0.7470 | 0.7328 | 0.8828 | | 0.3606 | 3.0 | 3750 | 0.3685 | 0.7331 | 0.7543 | 0.7435 | 0.8883 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
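The NER card above reports metrics but no inference code. A minimal sketch with the token-classification pipeline follows, using `aggregation_strategy="simple"` to merge word pieces into entity spans; the example sentence is illustrative.

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="dbsamu/electra-small-discriminator-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```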
anirudh21/bert-base-uncased-finetuned-wnli
anirudh21
2022-01-24T13:33:56Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6854 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6854 | 0.5634 | | No log | 2.0 | 80 | 0.6983 | 0.3239 | | No log | 3.0 | 120 | 0.6995 | 0.5352 | | No log | 4.0 | 160 | 0.6986 | 0.5634 | | No log | 5.0 | 200 | 0.6996 | 0.5634 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
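WNLI is a sentence-pair task, which the card does not illustrate. A hedged sketch below shows how a premise/hypothesis pair could be encoded and classified with this checkpoint; the card does not document the label names, so only raw probabilities are printed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "anirudh21/bert-base-uncased-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# WNLI is a sentence-pair task: both sentences are passed to the tokenizer,
# which joins them into one sequence with a separator token.
premise = "The trophy doesn't fit in the suitcase because it is too big."
hypothesis = "The trophy is too big."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))
```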
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet_inference_only
deepdoctection
2022-01-24T13:05:27Z
0
0
null
[ "Tensorflow", "dataset:Publaynet", "arxiv:1908.07836", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - Tensorflow license: apache-2.0 datasets: - Publaynet --- # Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on Publaynet for Document Layout Analysis The model and its training code have been taken mainly from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Please check [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836). This model differs from the model used in the paper. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used in a full **deep**doctection pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore, it cannot be used for fine-tuning. To fine-tune this model, please check [this model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet). ## How this model was trained To recreate the training run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn publaynet = DatasetRegistry.get_dataset("publaynet") path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml") path_weights = "" dataset_train = publaynet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0"] build_train_config=["max_datapoints=335703"] dataset_val = publaynet build_val_config = ["max_datapoints=2000"] coco_metric = MetricRegistry.get_metric("coco") train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ```
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet
deepdoctection
2022-01-24T13:02:44Z
0
1
null
[ "Tensorflow", "dataset:Publaynet", "arxiv:1908.07836", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - Tensorflow license: apache-2.0 datasets: - Publaynet --- # Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on Publaynet for Document Layout Analysis The model and its training code have been taken mainly from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Please check [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836). This model differs from the model used in the paper. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used in a full **deep**doctection pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## How this model was trained To recreate the training run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn publaynet = DatasetRegistry.get_dataset("publaynet") path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml") path_weights = "" dataset_train = publaynet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0"] build_train_config=["max_datapoints=335703"] dataset_val = publaynet build_val_config = ["max_datapoints=2000"] coco_metric = MetricRegistry.get_metric("coco") train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ``` ## How to fine-tune this model To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
nimelinia/rut5-reply-headline-model
nimelinia
2022-01-24T12:31:54Z
1
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model was trained from rut5-base-multitask on pairs of questions and answers (in Russian). The model demonstrates interesting behavior with the "reply" and "headline" options. When the model creates a headline for a paragraph of text, it not only reuses phrases from the text but also generates new words and sometimes new meanings. Examples of questions and answers: > Как зовут отца Александра Сергеевича Пушкина? > - Пушкин > Где купить вкусное мороженое? > - В супермаркете > Красивая ли Мона Лиза? > - Очень красивая Examples of headlines: > Власти Пекина из-за пандемии COVID-19 призвали жителей города отказаться от помощи и избегать любого контакта с олимпийскими машинами, попавшими в ДТП. Об этом сообщает South China Morning Post. > - Китайский губернатор призвал жителей Пекина отказаться от помощи > Казахский народ должен поддержать своего президента Касым-Жомарт Токаева на фоне угрозы повторения массовых беспорядков, но и властям страны следует провести демократические реформы для снижения недовольства. Об этом в интервью изданию Orda заявил бывший генеральный продюсер гостелеканала «Хабар», экс-глава канала «Ел Арна» Серик Абас-Шах. > - Казахский народ должен поддержать Токаева > Позиция России по макроэкономическим показателям является лучшей в мире. Об этом сказал ТАСС российский исполнительный директор в Международном валютном фонде (МВФ) Алексей Можин. > - Российская экономика является лучшей в мире
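The card above describes "reply" and "headline" modes but gives no code. The hedged sketch below assumes the checkpoint is a standard T5-style seq2seq model (the tags list only `transformers`/`pytorch`) and that the mode is selected with a textual prefix such as `"headline | "`, mirroring the rut5-base-multitask convention; both the model class and the prefix format are assumptions, not documented by the card.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: the checkpoint is T5-compatible and expects a task prefix such
# as "headline | " or "reply | " (following the rut5-base-multitask family).
model_id = "nimelinia/rut5-reply-headline-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example paragraph taken from the card's headline examples.
text = "headline | Позиция России по макроэкономическим показателям является лучшей в мире."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```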
emre/wav2vec2-large-xlsr-53-demo-colab
emre
2022-01-24T10:54:03Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - robust-speech-event datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-53-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3966 - Wer: 0.4834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1516 | 4.21 | 400 | 2.7673 | 1.0 | | 0.9134 | 8.42 | 800 | 0.4618 | 0.6418 | | 0.3273 | 12.63 | 1200 | 0.4188 | 0.5535 | | 0.2252 | 16.84 | 1600 | 0.4144 | 0.5232 | | 0.1692 | 21.05 | 2000 | 0.3995 | 0.5030 | | 0.1355 | 25.26 | 2400 | 0.4073 | 0.4920 | | 0.1172 | 29.47 | 2800 | 0.3966 | 0.4834 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
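The ASR card above lists hyperparameters but no inference example. A minimal sketch with the automatic-speech-recognition pipeline follows; `sample.wav` is a placeholder path for any short speech recording in the language the checkpoint was fine-tuned on (the card does not state which Common Voice subset was used).

```python
from transformers import pipeline

# The pipeline's feature extractor handles resampling to the 16 kHz rate
# expected by wav2vec2 checkpoints.
asr = pipeline(
    "automatic-speech-recognition",
    model="emre/wav2vec2-large-xlsr-53-demo-colab",
)

# "sample.wav" is a placeholder path for a short speech recording.
print(asr("sample.wav"))
```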
Vibharkchauhan/distilbert-base-uncased-finetuned-ner
Vibharkchauhan
2022-01-24T10:30:44Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9192622045504749 - name: Recall type: recall value: 0.9310884886452623 - name: F1 type: f1 value: 0.9251375534930251 - name: Accuracy type: accuracy value: 0.9823820039080496 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Precision: 0.9193 - Recall: 0.9311 - F1: 0.9251 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2393 | 1.0 | 878 | 0.0732 | 0.9052 | 0.9207 | 0.9129 | 0.9801 | | 0.0569 | 2.0 | 1756 | 0.0626 | 0.9193 | 0.9311 | 0.9251 | 0.9824 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
hfl/cino-large
hfl
2022-01-24T09:28:57Z
5
9
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "zh", "bo", "kk", "ko", "mn", "ug", "yue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - zh - bo - kk - ko - mn - ug - yue license: "apache-2.0" --- ## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual ability for language understanding. We have seen rapid progress in building multilingual PLMs in recent years. However, there is a lack of contributions on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority-language corpora, such as: - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM You may also be interested in: Chinese MacBERT: https://github.com/ymcui/MacBERT Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA Chinese XLNet: https://github.com/ymcui/Chinese-XLNet Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology
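The CINO card explains what the model covers but not how to query it. A minimal fill-mask sketch follows, assuming the standard XLM-R `<mask>` token; the Chinese example sentence is illustrative.

```python
from transformers import pipeline

# CINO is built on XLM-R, so the mask token is assumed to be "<mask>".
fill = pipeline("fill-mask", model="hfl/cino-large")

# "The capital of China is <mask>."
print(fill("中国的首都是<mask>。"))
```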
public-data/yolov5_anime
public-data
2022-01-24T05:53:35Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# yolov5_anime - Repo: https://github.com/zymk9/yolov5_anime - https://drive.google.com/file/d/1-MO9RYPZxnBfpNiGY6GdsqCeQWYNxBdl/view
guoqiang/WuDaoSailing
guoqiang
2022-01-24T05:39:39Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# WudaoSailing WudaoSailing is a package for pretraining chinese Language Model and finetune tasks. Now it supports GLM, Bert, T5, Cogview and Roberta models. ## Get Started ### Docker Image We prepare two docker images based on CUDA 10.2 and CUDA 11.2. You can build images from the docker file [docs/docker/cuda102.dockerfile](docs/docker/cuda102.dcokerfile) or pull the pre-built images from Docker Hub and run with docker v19.03+ ```shell nvidia-docker run -id --hostname=V100 --network=host\ --ipc=host --shm-size=16gb --name=deepspeed-cuda \ -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 \ -v /DATA/disk1/docker/containers/:/data deepspeed/cuda102:lastest ``` or replace `cuda102` with `cuda112`. ```shell docker build -f cuda102.dockerfile -t deepspeed/cuda102 . ``` ### Clone this repo ```shell git clone https://github.com/wangguojim/WudaoSailing.git cd WudaoSailing pip install -r requirements.txt ``` ## GLM We show some examples based on GLM model. ### finetuene We provide scripts for finetuning GLM on some downstream tasks. #### SuperGLUE - Download the [SuperGlue](https://super.gluebenchmark.com/tasks) data and check the experiment setup in [examples/glm/scripts/ds_finetune_superglue.sh](xamples/glm/scripts/ds_finetune_superglue.sh). Note that `DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH` need to be changed to your local path. You may also change the `batch-size` and `nproc_per_node` according to your available hardware. - Run the following script for text similarity finetune task (use the afqmc dataset as an example) ``` cd examples/glm/ bash scripts/ds_finetune_superglue.sh\ config/model_blocklm_large_chinese.sh\ config_tasks/task_afqmc.sh ``` - Run the following script for text classification finetune task (use the thunews and thunews dataset as an example) ``` cd examples/glm/ bash scripts/ds_finetune_superglue.sh\ config/model_blocklm_large_chinese.sh\ config_tasks/task_tnews.sh ``` - Run the following script for causal inference finetune task (use the COPA dataset as an example) ``` cd examples/glm/ bash scripts/ds_finetune_superglue.sh\ config/model_blocklm_large_chinese.sh\ config_tasks/task_copa.sh ``` - To apply GLM to a new NLU dataset with cloze-filling finetuning, implement a `DataProcessor` in [examples/glm/tasks/superglue/dataset.py](examples/glm/tasks/superglue/dataset.py) for data loading and add a `PVP` in [examples/glm/tasks/superglue/pvp.py](examples/glm/tasks/superglue/pvp.py) for the cloze question. More details can be found [here](examples/glm/tasks/superglue/README.md). #### Blank Filling (Interactive) * Change `CHECKPOINT_PATH` to your local path. Run the following script ``` bash config/generate_block.sh\ config/model_blocklm_large_chinese.sh ``` ##### Example1 (Entity Prediction): Context: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建,门高25米,顶上矗立两武士青铜古兵车铸像。 GLM:拿破仑军队攻克米兰城 ##### Example2 (Sentence Prediction) Context: 工业互联网(Industrial Internet)是新一代信息通信技术与工业经济深度融合的新型基础设施、应用模式和工业生态,通过对人、机、物、系统等的全面连接,构建起覆盖全产业链、全价值链的全新制造和服务体系,为工业乃至产业数字化、网络化、智能化发展提供了实现途径,是第四次工业革命的重要基石。[sMASK]它以网络为基础、平台为中枢、数据为要素、安全为保障,既是工业数字化、网络化、智能化转型的基础设施,也是互联网、大数据、人工智能与实体经济深度融合的应用模式,同时也是一种新业态、新产业,将重塑企业形态、供应链和产业链。当前,工业互联网融合应用向国民经济重点行业广泛拓展,形成平台化设计、智能化制造、网络化协同、个性化定制、服务化延伸、数字化管理六大新模式,赋能、赋智、赋值作用不断显现,有力的促进了实体经济提质、增效、降本、绿色、安全发展。 GLM: 工业互联网是制造业技术、管理、模式的重大变革,是推动互联网、大数据、人工智能和实体经济深度融合的重要载体,是建设制造强国和网络强国的重要基础。 ##### Example3 (Long Text Generation) Context: 问题:高斯所在的国家有什么汽车品牌?答案:[gMASK] GLM:答案:[gMASK]<|startofpiece|>德国奔驰、德国大众、别克、沃尔沃、斯柯达、本田、雪铁龙. 
### Ptuning Run the following script to integrate p-tuning with GLM: ```shell cd algutils/ptuning/ bash finetune_zy.sh ``` ### Pretrain Run the following script to pre-train the GLM-Large model: ```shell cd examples/glm/ bash scripts/ds_pretrain_nvidia.sh config/ds_block_large.sh ``` The script [examples/glm/config/ds_pretrain_nvidia.sh](examples/glm/config/ds_pretrain_nvidia.sh) launches the training program with DeepSpeed. You should change `NUM_WORKERS` and `NUM_GPUS_PER_WORKER` to the number of workers and the number of GPUs per worker. Also change `HOST_FILE_PATH` to the path to an OpenMPI-style hostfile. More details about the DeepSpeed launcher can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). The file [examples/glm/config/ds_block_large.sh](examples/glm/config/ds_block_large.sh) defines the hyperparameters for pretraining. Most of the arguments are fairly self-explanatory. Specifically, `--train-data` can be multiple keywords defined in `NAMED_CORPORA` in [data_utils/corpora.py](data_utils/corpora.py). The hyperparameters of the optimizer are defined in the corresponding json file under `config`. The semantics of the json file can be found [here](https://www.deepspeed.ai/docs/config-json). ## Bert We show some examples based on the Bert model. ### Pretrain Run the following script to pre-train the Bert model: ```shell cd examples/bert/ python quick_start.py ``` ## CogView ### Pretrain Run the following script to pre-train the CogView model: ```shell cd examples/cogview/ bash config/pretrain_multiple_nodes.sh ``` ### Inference Run the following script to test the text-to-image ability: ```shell cd examples/cogview/ bash config/text2image_cogview.sh ```
lucianpopa/autonlp-TREC-classification-522314623
lucianpopa
2022-01-24T02:31:54Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:lucianpopa/autonlp-data-TREC-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-TREC-classification co2_eq_emissions: 15.186006626915715 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 522314623 - CO2 Emissions (in grams): 15.186006626915715 ## Validation Metrics - Loss: 0.24612033367156982 - Accuracy: 0.9643183897529735 - Macro F1: 0.9493690949638435 - Micro F1: 0.9643183897529735 - Weighted F1: 0.9642384162837268 - Macro Precision: 0.9372705571897225 - Micro Precision: 0.9643183897529735 - Weighted Precision: 0.9652870438320825 - Macro Recall: 0.9649638583139503 - Micro Recall: 0.9643183897529735 - Weighted Recall: 0.9643183897529735 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-TREC-classification-522314623 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
jiobiala24/wav2vec2-base-checkpoint-8
jiobiala24
2022-01-24T01:26:07Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-8 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-7.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-7.1) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9561 - Wer: 0.3271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3117 | 1.59 | 1000 | 0.5514 | 0.3451 | | 0.2509 | 3.19 | 2000 | 0.5912 | 0.3328 | | 0.1918 | 4.78 | 3000 | 0.6103 | 0.3346 | | 0.1612 | 6.38 | 4000 | 0.6469 | 0.3377 | | 0.1388 | 7.97 | 5000 | 0.6597 | 0.3391 | | 0.121 | 9.57 | 6000 | 0.6911 | 0.3472 | | 0.1096 | 11.16 | 7000 | 0.7300 | 0.3457 | | 0.0959 | 12.76 | 8000 | 0.7660 | 0.3400 | | 0.0882 | 14.35 | 9000 | 0.8316 | 0.3394 | | 0.0816 | 15.95 | 10000 | 0.8042 | 0.3357 | | 0.0739 | 17.54 | 11000 | 0.8087 | 0.3346 | | 0.0717 | 19.14 | 12000 | 0.8590 | 0.3353 | | 0.066 | 20.73 | 13000 | 0.8750 | 0.3336 | | 0.0629 | 22.33 | 14000 | 0.8759 | 0.3333 | | 0.0568 | 23.92 | 15000 | 0.8963 | 0.3321 | | 0.0535 | 25.52 | 16000 | 0.9391 | 0.3323 | | 0.0509 | 27.11 | 17000 | 0.9279 | 0.3296 | | 0.0498 | 28.71 | 18000 | 0.9561 | 0.3271 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
public-data/danbooru-pretrained
public-data
2022-01-23T23:31:03Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# danbooru-pretrained - Repo: https://github.com/RF5/danbooru-pretrained - https://github.com/RF5/danbooru-pretrained/releases/tag/v0.1 - https://github.com/RF5/danbooru-pretrained/releases/download/v0.1/resnet50-13306192.pth - https://github.com/RF5/danbooru-pretrained/raw/master/config/class_names_6000.json
huggingtweets/twmatthieuh
huggingtweets
2022-01-23T21:14:21Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/twmatthieuh/1642972456953/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484525847176691715/BwsIu8hd_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Matthieu H.</div> <div style="text-align: center; font-size: 14px;">@twmatthieuh</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Matthieu H.. | Data | Matthieu H. | | --- | --- | | Tweets downloaded | 1225 | | Retweets | 507 | | Short tweets | 26 | | Tweets kept | 692 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hx6jinu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @twmatthieuh's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nrhuqdse) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nrhuqdse/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/twmatthieuh') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mattchurgin/xls-r-eng
mattchurgin
2022-01-23T17:31:10Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [patrickvonplaten/wav2vec2_tiny_random_robust](https://huggingface.co/patrickvonplaten/wav2vec2_tiny_random_robust) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1 - Datasets 1.18.1.dev0 - Tokenizers 0.11.0
Emanuel/roebrta-base-val-test
Emanuel
2022-01-23T15:12:04Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: language-modeling results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # language-modeling This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.8.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
ylh1013/fintune-ja-chatbot
ylh1013
2022-01-23T14:21:02Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - finetuned_from license: mit tags: - generated_from_trainer model-index: - name: fintune-ja-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fintune-ja-chatbot This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu102 - Tokenizers 0.10.3
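The card above documents training settings only. A hedged generation sketch follows; it assumes the repository ships the same slow sentencepiece tokenizer as the base rinna/japanese-gpt2-medium model, which the card does not confirm, and the Japanese prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ylh1013/fintune-ja-chatbot"
# Assumption: the repo uses the base model's slow sentencepiece tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Hello, today's plan is" (illustrative Japanese chatbot prompt)
prompt = "こんにちは、今日の予定は"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```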
artemis13fowl/distilbert-base-uncased-finetuned-imdb
artemis13fowl
2022-01-23T14:10:31Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5756 | 2.0 | 314 | 2.4230 | | 2.5395 | 3.0 | 471 | 2.4358 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
Madhour/gpt2-eli5
Madhour
2022-01-23T12:00:23Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "ELI5", "en", "dataset:eli5", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: en tags: - ELI5 license: gpl-3.0 datasets: - eli5 Task: Summarization widget: - text: "<|BOS|><|SEP|>Consulting,business,Fraud<|SEP|>" inference: parameters: temperature: 0.9 return_full_text: False repetition_penalty: 1 --- # Conditional ELI5 Generator Given a few keywords, it generates an ELI5 question with a corresponding answer. The model is mainly used for [SeemsPhishy](https://github.com/madhour/seemsphishy) to auto-generate newsletters for phishing/penetration-testing. # How to use ```Python from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM from torch import tensor tokenizer = AutoTokenizer.from_pretrained("Madhour/gpt2-eli5") model = AutoModelForCausalLM.from_pretrained("Madhour/gpt2-eli5") prompt = "<|BOS|>" + "I have a question." + "<|SEP|>" + "keyword1,keyword2,keyword3" + "<|SEP|>" prompt = tensor(tokenizer.encode(prompt)).unsqueeze(0) text = model.generate(prompt, do_sample=True, min_length=50, max_length=768, top_k=30, top_p=0.7, temperature=0.9, repetition_penalty=2.0, num_return_sequences=3) ```
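The snippet in the card stops at `model.generate`, which returns token ids. A short continuation below, assuming the variables from that snippet are still in scope, decodes the generations back into strings.

```python
# Continuing from the card's snippet: `text` holds three generated sequences
# of token ids (num_return_sequences=3); decode each back into a string.
for i, sequence in enumerate(text):
    print(f"--- generation {i} ---")
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```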
asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala
asanka25
2022-01-23T10:59:51Z
30
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
This model was created from the xlm-roberta-base model and fine-tuned on the CoNLL 2003 dataset. On top of that trained model, we trained it again on Sinhala NER data that was also formatted to the CoNLL format.
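The card gives no usage example. A hedged sketch below runs the checkpoint through a token-classification pipeline; the label scheme is assumed to follow the CoNLL-2003 tags of the English fine-tuning stage, and the Sinhala sentence is only an illustrative input.

```python
from transformers import pipeline

# Assumption: the label set follows the CoNLL-2003 scheme (PER/ORG/LOC/MISC).
ner = pipeline(
    "token-classification",
    model="asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala",
    aggregation_strategy="simple",
)

# Illustrative Sinhala sentence: "Mahinda Rajapaksa is a former president of Sri Lanka."
print(ner("මහින්ද රාජපක්ෂ ශ්‍රී ලංකාවේ හිටපු ජනාධිපතිවරයෙකි."))
```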
dandelin/vilt-b32-finetuned-flickr30k
dandelin
2022-01-23T09:46:32Z
34
3
transformers
[ "transformers", "pytorch", "vilt", "arxiv:1505.04870", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 --- # Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k Vision-and-Language Transformer (ViLT) model fine-tuned on [Flickr30k](https://arxiv.org/abs/1505.04870#:~:text=The%20Flickr30k%20dataset%20has%20become,for%20sentence%2Dbased%20image%20description.&text=Such%20annotations%20are%20essential%20for,entity%20mentions%20in%20an%20image.). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model for image and text retrieval. ### How to use Here is how to use the model in PyTorch: ``` from transformers import ViltProcessor, ViltForImageAndTextRetrieval import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k") model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k") # forward pass scores = dict() for text in texts: encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0, :].item() ``` ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ```
dandelin/vilt-b32-finetuned-coco
dandelin
2022-01-23T09:45:24Z
10,342
1
transformers
[ "transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 --- # Vision-and-Language Transformer (ViLT), fine-tuned on COCO Vision-and-Language Transformer (ViLT) model fine-tuned on [COCO](https://cocodataset.org/#home). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model for image and text retrieval. ### How to use Here is how to use the model in PyTorch: ``` from transformers import ViltProcessor, ViltForImageAndTextRetrieval import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco") model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco") # forward pass scores = dict() for text in texts: encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0, :].item() ``` ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ```
dandelin/vilt-b32-finetuned-nlvr2
dandelin
2022-01-23T09:43:30Z
673
2
transformers
[ "transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 --- # Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2 Vision-and-Language Transformer (ViLT) model fine-tuned on [NLVR2](https://lil.nlp.cornell.edu/nlvr/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model to determine whether a sentence is true or false given 2 images. ### How to use Here is how to use the model in PyTorch: ``` from transformers import ViltProcessor, ViltForImagesAndTextClassification import requests from PIL import Image image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw) image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw) text = "The left image contains twice the number of dogs as the right image." processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2") model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2") # prepare inputs encoding = processor([image1, image2], text, return_tensors="pt") # forward pass outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0)) logits = outputs.logits idx = logits.argmax(-1).item() print("Predicted answer:", model.config.id2label[idx]) ``` ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ```
baffo32/pyc2py_alpha2
baffo32
2022-01-23T08:17:55Z
5
0
transformers
[ "transformers", "jax", "t5", "text2text-generation", "multilingual", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: multilingual datasets: - mc4 license: apache-2.0 --- # ByT5 - Base ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: ```python from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained('google/byt5-base') input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3 # add 3 for special tokens labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3 # add 3 for special tokens loss = model(input_ids, labels=labels).loss # forward pass ``` For batched inference & training it is however recommended using a tokenizer class for padding: ```python from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained('google/byt5-base') tokenizer = AutoTokenizer.from_pretrained('google/byt5-base') model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt") labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids loss = model(**model_inputs, labels=labels).loss # forward pass ``` ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. 
As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
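The card's snippets above only compute a training loss. As a complementary sketch (not from the original card), generation works the same way through `model.generate`; since `google/byt5-base` is only pre-trained with span corruption, the decoded output is illustrative rather than a useful prediction:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-base")

# The ByT5 tokenizer maps text to UTF-8 byte values shifted by the special-token ids,
# so no subword vocabulary is involved.
inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=50)

# Decoding turns the predicted byte ids back into a UTF-8 string.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```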
danhsf/t5-small-finetuned-en-to-pt
danhsf
2022-01-23T00:38:04Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-small-finetuned-en-to-pt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-pt This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3295 - Bleu: 5.6807 - Gen Len: 18.6772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 0.5787 | 1.0 | 6250 | 0.4928 | 4.1007 | 18.638 | | 0.5089 | 2.0 | 12500 | 0.4463 | 4.3492 | 18.663 | | 0.4652 | 3.0 | 18750 | 0.4215 | 4.68 | 18.6652 | | 0.4353 | 4.0 | 25000 | 0.3980 | 4.8172 | 18.6708 | | 0.4042 | 5.0 | 31250 | 0.3799 | 4.9719 | 18.6514 | | 0.3734 | 6.0 | 37500 | 0.3676 | 5.2226 | 18.6572 | | 0.3396 | 7.0 | 43750 | 0.3513 | 5.2693 | 18.6596 | | 0.308 | 8.0 | 50000 | 0.3400 | 5.4546 | 18.676 | | 0.2767 | 9.0 | 56250 | 0.3331 | 5.5649 | 18.6708 | | 0.2424 | 10.0 | 62500 | 0.3295 | 5.6807 | 18.6772 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
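The card above lists no usage example, so here is a minimal inference sketch assuming the standard `transformers` seq2seq API; whether this fine-tune expects the usual T5 task prefix is not documented, so the prefix below is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "danhsf/t5-small-finetuned-en-to-pt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical T5-style prefix; drop it if the model was trained on raw sentence pairs.
text = "translate English to Portuguese: I like to read books on the weekend."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```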
pere/xls-test
pere
2022-01-22T18:40:50Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8789 - Wer: 1.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
MelissaTESSA/distilbert-base-uncased-finetuned-cola
MelissaTESSA
2022-01-22T17:01:17Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5206791471093309 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6324 - Matthews Correlation: 0.5207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5245 | 1.0 | 535 | 0.5155 | 0.4181 | | 0.3446 | 2.0 | 1070 | 0.5623 | 0.4777 | | 0.2331 | 3.0 | 1605 | 0.6324 | 0.5207 | | 0.1678 | 4.0 | 2140 | 0.7706 | 0.5106 | | 0.1255 | 5.0 | 2675 | 0.8852 | 0.4998 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened
alistvt
2022-01-22T05:06:00Z
30
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-pretrain-finetuned-coqa-falttened results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-pretrain-finetuned-coqa-falttened This model is a fine-tuned version of [alistvt/bert-base-uncased-pretrained-mlm-coqa-stories](https://huggingface.co/alistvt/bert-base-uncased-pretrained-mlm-coqa-stories) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.2886 | 0.29 | 2000 | 3.0142 | | 3.0801 | 0.59 | 4000 | 2.8347 | | 2.9744 | 0.88 | 6000 | 2.7643 | | 2.494 | 1.18 | 8000 | 2.7605 | | 2.4417 | 1.47 | 10000 | 2.7790 | | 2.4042 | 1.77 | 12000 | 2.7382 | | 2.1285 | 2.06 | 14000 | 2.8588 | | 2.0569 | 2.36 | 16000 | 2.8937 | | 2.0794 | 2.65 | 18000 | 2.8511 | | 2.0679 | 2.95 | 20000 | 2.8655 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
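As a usage sketch (not part of the original card), the checkpoint can be queried through the extractive question-answering pipeline; CoQA is conversational, so real usage typically flattens earlier turns into the question string, while the single-turn call below is only illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened",
)

# Toy story/question pair; CoQA-style usage would prepend the dialogue history.
story = "Anna adopted a small grey cat last spring. She named it Misha."
print(qa(question="What did Anna adopt?", context=story))
```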
ms29315/distilbert-base-uncased-finetuned-cola
ms29315
2022-01-21T19:56:06Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ms29315/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ms29315/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3100 - Validation Loss: 0.5090 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3100 | 0.5090 | 0 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.18.0 - Tokenizers 0.10.3
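Since the card gives no usage, a minimal TensorFlow inference sketch is shown below, assuming the checkpoint loads through the standard `transformers` TF classes; the CoLA label names may not be set in the config, so only the argmax index is printed:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "ms29315/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# CoLA is a binary acceptability task; by convention index 0 = unacceptable, 1 = acceptable.
inputs = tokenizer("The book was written by the student.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```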
facebook/xm_transformer_600m-en_zh-multi_domain
facebook
2022-01-21T19:02:57Z
5
2
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:covost2", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-zh datasets: - must_c - covost2 widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_zh-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Chinese - Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-zh-cv7_css10](https://huggingface.co/facebook/tts_transformer-zh-cv7_css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_zh-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-zh-cv7_css10", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-en_vi-multi_domain
facebook
2022-01-21T19:02:41Z
8
1
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-vi datasets: - must_c widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_vi-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Vietnamese - Trained on MuST-C, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-vi-cv7](https://huggingface.co/facebook/tts_transformer-vi-cv7) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_vi-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-vi-cv7", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-en_tr-multi_domain
facebook
2022-01-21T19:02:30Z
18
1
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:covost2", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-tr datasets: - must_c - covost2 widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_tr-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Turkish - Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-tr-cv7](https://huggingface.co/facebook/tts_transformer-tr-cv7) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_tr-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-tr-cv7", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-en_ru-multi_domain
facebook
2022-01-21T19:01:38Z
8
1
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-ru datasets: - must_c widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_ru-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Russian - Trained on MuST-C, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-ru-cv7_css10](https://huggingface.co/facebook/tts_transformer-ru-cv7_css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_ru-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-ru-cv7_css10", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-en_es-multi_domain
facebook
2022-01-21T19:01:24Z
2
1
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:europarl_st", "dataset:voxpopuli", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-es datasets: - must_c - europarl_st - voxpopuli widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_es-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Spanish - Trained on MuST-C, EuroParl-ST, VoxPopuli, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-es-css10](https://huggingface.co/facebook/tts_transformer-es-css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_es-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-es-css10", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-fr_en-multi_domain
facebook
2022-01-21T18:59:43Z
10
0
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:mtedx", "dataset:covost2", "dataset:europarl_st", "dataset:voxpopuli", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: fr-en datasets: - mtedx - covost2 - europarl_st - voxpopuli widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-fr_en-multi_domain/resolve/main/common_voice_fr_19731305.mp3 --- # xm_transformer_600m-fr_en-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - French-English - Trained on mTEDx, CoVoST 2, EuroParl-ST, VoxPopuli, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/fastspeech2-en-ljspeech](https://huggingface.co/facebook/fastspeech2-en-ljspeech) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-fr_en-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A 
Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
jiobiala24/wav2vec2-base-checkpoint-7.1
jiobiala24
2022-01-21T15:50:15Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-7.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-7.1 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-6](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-6) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9369 - Wer: 0.3243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3124 | 1.75 | 1000 | 0.5602 | 0.3403 | | 0.2428 | 3.5 | 2000 | 0.5924 | 0.3431 | | 0.1884 | 5.24 | 3000 | 0.6161 | 0.3423 | | 0.1557 | 6.99 | 4000 | 0.6570 | 0.3415 | | 0.1298 | 8.74 | 5000 | 0.6837 | 0.3446 | | 0.1141 | 10.49 | 6000 | 0.7304 | 0.3396 | | 0.1031 | 12.24 | 7000 | 0.7264 | 0.3410 | | 0.0916 | 13.99 | 8000 | 0.7229 | 0.3387 | | 0.0835 | 15.73 | 9000 | 0.8078 | 0.3458 | | 0.0761 | 17.48 | 10000 | 0.8304 | 0.3408 | | 0.0693 | 19.23 | 11000 | 0.8290 | 0.3387 | | 0.0646 | 20.98 | 12000 | 0.8593 | 0.3372 | | 0.0605 | 22.73 | 13000 | 0.8728 | 0.3345 | | 0.0576 | 24.48 | 14000 | 0.9111 | 0.3297 | | 0.0529 | 26.22 | 15000 | 0.9247 | 0.3273 | | 0.0492 | 27.97 | 16000 | 0.9248 | 0.3250 | | 0.0472 | 29.72 | 17000 | 0.9369 | 0.3243 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
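A minimal transcription sketch (not from the original card), assuming the standard `transformers` ASR pipeline; the audio path is a placeholder, and the pipeline handles decoding and resampling to the 16 kHz mono input the model expects:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jiobiala24/wav2vec2-base-checkpoint-7.1",
)

# "sample.wav" is a placeholder path to a local audio file.
print(asr("sample.wav")["text"])
```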
Gianpe/en_textcat_emotion_xlm
Gianpe
2022-01-21T15:09:03Z
3
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_textcat_emotion_xlm results: [] ---
shivam/xls-r-hindi
shivam
2022-01-21T14:00:59Z
7
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.4484 - Wer: 1.0145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1844 | 3.4 | 500 | 5.2015 | 0.9999 | | 3.3962 | 6.8 | 1000 | 3.4017 | 1.0002 | | 2.5433 | 10.2 | 1500 | 1.6884 | 1.0222 | | 1.5099 | 13.6 | 2000 | 0.7929 | 1.0188 | | 1.2685 | 17.01 | 2500 | 0.6122 | 1.0191 | | 1.1844 | 20.41 | 3000 | 0.5434 | 1.0197 | | 1.0945 | 23.81 | 3500 | 0.5208 | 1.0316 | | 1.0506 | 27.21 | 4000 | 0.4941 | 1.0139 | | 1.0199 | 30.61 | 4500 | 0.4736 | 1.0106 | | 0.9546 | 34.01 | 5000 | 0.4664 | 1.0164 | | 0.9388 | 37.41 | 5500 | 0.4565 | 1.0085 | | 0.9125 | 40.81 | 6000 | 0.4636 | 1.0148 | | 0.8733 | 44.22 | 6500 | 0.4530 | 1.0154 | | 0.8829 | 47.62 | 7000 | 0.4494 | 1.0152 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
BlindMan820/Sarcastic-News-Headlines
BlindMan820
2022-01-21T13:31:44Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "Text", "Sequence-Classification", "Sarcasm", "DistilBert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - en tags: - Text - Sequence-Classification - Sarcasm - DistilBert datasets: - Kaggle Dataset metrics: - precision - recall - f1 --- Dataset Link - https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detection
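Beyond the dataset link, the card gives no usage; a minimal sketch with the text-classification pipeline is shown below. The label names depend on this checkpoint's config (often the generic LABEL_0/LABEL_1 when they were not renamed), and the headlines are made-up examples:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="BlindMan820/Sarcastic-News-Headlines",
)

headlines = [
    "Area man wins lifetime supply of things he is allergic to",
    "City council approves new budget for road repairs",
]
for prediction in detector(headlines):
    print(prediction)  # e.g. {"label": ..., "score": ...}
```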
alistvt/bert-base-uncased-pretrained-mlm-coqa-stories
alistvt
2022-01-21T13:17:32Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-pretrained-mlm-coqa-stories results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-pretrained-mlm-coqa-stories This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0573 | 1.0 | 2479 | 1.8805 | | 1.9517 | 2.0 | 4958 | 1.8377 | | 1.9048 | 3.0 | 7437 | 1.8310 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
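As an illustrative sketch (not part of the original card), the pretrained MLM can be exercised directly with the fill-mask pipeline; BERT-style models use the literal `[MASK]` placeholder:

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="alistvt/bert-base-uncased-pretrained-mlm-coqa-stories",
)

# Prints the top predicted tokens for the masked position with their scores.
for pred in fill_mask("The children listened to the [MASK] before bedtime."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```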
MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387
MadhurJindalWorkMail
2022-01-21T07:05:45Z
3
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:MadhurJindalWorkMail/autonlp-data-Gibb-Detect", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - MadhurJindalWorkMail/autonlp-data-Gibb-Detect co2_eq_emissions: 70.95647633212745 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 515314387 - CO2 Emissions (in grams): 70.95647633212745 ## Validation Metrics - Loss: 0.08077705651521683 - Accuracy: 0.9760103738923709 - Macro F1: 0.9728412857204902 - Micro F1: 0.9760103738923709 - Weighted F1: 0.9759907151741426 - Macro Precision: 0.9736622407675567 - Micro Precision: 0.9760103738923709 - Weighted Precision: 0.97673611876005 - Macro Recall: 0.9728978421381711 - Micro Recall: 0.9760103738923709 - Weighted Recall: 0.9760103738923709 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
anuragshas/wav2vec2-large-xls-r-300m-ur
anuragshas
2022-01-21T04:32:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-ur results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ur This model is a fine-tuned version of [anuragshas/wav2vec2-large-xls-r-300m-ur](https://huggingface.co/anuragshas/wav2vec2-large-xls-r-300m-ur) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.0508 - Wer: 0.7328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.12 - num_epochs: 240 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.0719 | 66.67 | 400 | 1.8510 | 0.7432 | | 0.0284 | 133.33 | 800 | 2.0088 | 0.7415 | | 0.014 | 200.0 | 1200 | 2.0508 | 0.7328 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Gigworks/ASR_zh_espnet2
Gigworks
2022-01-21T02:58:59Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
<b>Speech-To-Text Chinese Model</b> <br/><br/> Reference: <br/> Model - https://huggingface.co/espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char <br/> Code - https://huggingface.co/spaces/akhaliq/espnet2_asr/blob/main/app.py
mbateman/distilbert-base-uncased-finetuned-imdb
mbateman
2022-01-20T20:43:24Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6482 | 1.0 | 625 | 2.4283 | | 2.5156 | 2.0 | 1250 | 2.3816 | | 2.475 | 3.0 | 1875 | 2.3638 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.1
anuragshas/wav2vec2-large-xls-r-300m-hi
anuragshas
2022-01-20T20:38:42Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.4156 - Wer: 0.7181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.7703 | 2.72 | 400 | 2.2274 | 0.9259 | | 0.6515 | 5.44 | 800 | 1.5812 | 0.7581 | | 0.339 | 8.16 | 1200 | 2.0590 | 0.7825 | | 0.2262 | 10.88 | 1600 | 2.0324 | 0.7603 | | 0.1665 | 13.6 | 2000 | 2.1396 | 0.7481 | | 0.1311 | 16.33 | 2400 | 2.2090 | 0.7379 | | 0.1079 | 19.05 | 2800 | 2.3907 | 0.7612 | | 0.0927 | 21.77 | 3200 | 2.5294 | 0.7478 | | 0.0748 | 24.49 | 3600 | 2.5024 | 0.7452 | | 0.0644 | 27.21 | 4000 | 2.4715 | 0.7307 | | 0.0569 | 29.93 | 4400 | 2.4156 | 0.7181 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
oandreae/financial_sentiment_model
oandreae
2022-01-20T20:00:01Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "perceiver", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - recall - accuracy - precision model-index: - name: financial_sentiment_model results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_50agree metrics: - name: Recall type: recall value: 0.8839956357328868 - name: Accuracy type: accuracy value: 0.8804123711340206 - name: Precision type: precision value: 0.8604175202419276 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # financial_sentiment_model This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.3467 - Recall: 0.8840 - Accuracy: 0.8804 - Precision: 0.8604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:| | 0.4481 | 1.0 | 273 | 0.4035 | 0.8526 | 0.8433 | 0.7955 | | 0.4069 | 2.0 | 546 | 0.4478 | 0.8683 | 0.8289 | 0.8123 | | 0.2225 | 3.0 | 819 | 0.3167 | 0.8747 | 0.8680 | 0.8387 | | 0.1245 | 4.0 | 1092 | 0.3467 | 0.8840 | 0.8804 | 0.8604 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
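A minimal inference sketch, assuming the checkpoint loads through the standard auto classes; the Perceiver text model takes byte-level ids via its `inputs` argument, and whether `id2label` was populated with the financial_phrasebank names is not documented, so the printed label may be generic:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "oandreae/financial_sentiment_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # byte-level Perceiver tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The language Perceiver was trained on fixed-length byte sequences, so pad to max length.
enc = tokenizer(
    "Operating profit rose clearly compared with the previous quarter.",
    padding="max_length",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(inputs=enc.input_ids, attention_mask=enc.attention_mask).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```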
muellerzr/fastai-pets-resnet-34
muellerzr
2022-01-20T19:01:14Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# The fastai models - PETS This model is based on Lesson 1 of [fastai](https://course.fast.ai) and of [Walk with fastai](https://walkwithfastai.com/Pets) ## Dataset Used This model was created with the [Oxford Pets](https://docs.fast.ai/data.external.html#Image-Classification-datasets) dataset in the fastai framework ## Model Training The model was trained as a binary classifier for cats vs. dogs ## How to use: First, ensure that `huggingface_hub` is installed: ```bash pip(3) install huggingface_hub ``` Next, download this model repo: ```python from huggingface_hub import snapshot_download snapshot_download(repo_id="muellerzr/fastai-pets-resnet-34") ``` Then install the correct fastai version: ```bash cd fastai-pets-resnet-34 pip(3) install -r requirements.txt ``` **NOTE: This is extremely important, as fastai versions are aggressively pinned based on training environment** And finally load in the fastai `Learner` and predict ```python from fastai.learner import load_learner learn = load_learner('model.pth') pred = learn.predict('myImage.jpg') ``` The library versions used for this model were recorded with [dependency_checker](https://muellerzr.github.io/dependency_checker)
tomwetherell/TOMFINSEN
tomwetherell
2022-01-20T18:19:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "perceiver", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - recall - accuracy - precision model-index: - name: TOMFINSEN results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_50agree metrics: - name: Recall type: recall value: 0.8985861629736692 - name: Accuracy type: accuracy value: 0.8742268041237113 - name: Precision type: precision value: 0.8509995913451198 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TOMFINSEN This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.3642 - Recall: 0.8986 - Accuracy: 0.8742 - Precision: 0.8510 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:| | 0.5403 | 1.0 | 273 | 0.4207 | 0.8358 | 0.8619 | 0.8534 | | 0.3939 | 2.0 | 546 | 0.3750 | 0.8943 | 0.8577 | 0.8225 | | 0.1993 | 3.0 | 819 | 0.3113 | 0.8882 | 0.8660 | 0.8367 | | 0.301 | 4.0 | 1092 | 0.3642 | 0.8986 | 0.8742 | 0.8510 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
ilevs/opus-mt-en-ru-finetuned-en-to-ru
ilevs
2022-01-20T18:18:30Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-en-ru-finetuned-en-to-ru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ru-finetuned-en-to-ru This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7682 - Bleu: 14.6112 - Gen Len: 7.202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 2.3198 | 1.0 | 4956 | 2.1261 | 9.5339 | 6.7709 | | 1.9732 | 2.0 | 9912 | 1.9639 | 10.4715 | 7.1254 | | 1.7127 | 3.0 | 14868 | 1.8780 | 11.6128 | 7.1106 | | 1.5614 | 4.0 | 19824 | 1.8367 | 12.8389 | 7.0468 | | 1.4276 | 5.0 | 24780 | 1.8040 | 13.7423 | 7.0403 | | 1.3096 | 6.0 | 29736 | 1.7820 | 14.1469 | 7.0555 | | 1.2381 | 7.0 | 34692 | 1.7761 | 13.9987 | 7.2225 | | 1.1784 | 8.0 | 39648 | 1.7725 | 14.4675 | 7.1799 | | 1.1376 | 9.0 | 44604 | 1.7692 | 14.4937 | 7.1957 | | 1.0862 | 10.0 | 49560 | 1.7682 | 14.6112 | 7.202 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
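A short usage sketch (not in the original card), using the generic translation pipeline; the short average generation length reported above suggests this fine-tune targets short phrases:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="ilevs/opus-mt-en-ru-finetuned-en-to-ru",
)
print(translator("How are you today?", max_length=40)[0]["translation_text"])
```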
ucberkeley-dlab/hate-measure-roberta-large
ucberkeley-dlab
2022-01-20T17:57:30Z
7
4
tf-keras
[ "tf-keras", "text-classification", "hate-speech", "counterspeech", "irt", "arxiv:2009.10277", "en", "dataset:ucberkeley-dlab/measuring-hate-speech", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en tags: - text-classification - hate-speech - counterspeech - irt - arxiv:2009.10277 datasets: - ucberkeley-dlab/measuring-hate-speech --- # Measuring hate speech: RoBERTa-Large This model predicts a continuous hate speech score as described in Kennedy et al. (2020). ## Citation ``` @article{kennedy2020constructing, title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application}, author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia}, journal={arXiv preprint arXiv:2009.10277}, year={2020} } ``` ## References Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277.
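The card does not show how to load the model. A loading sketch is given below, assuming the repository stores a standard Keras model (as the `tf-keras` library tag suggests); the expected input tensors — presumably RoBERTa-large token ids — are not documented here, so they are only inspected, not constructed:

```python
from huggingface_hub import from_pretrained_keras

# Downloads and loads the saved Keras model; requires TensorFlow to be installed.
model = from_pretrained_keras("ucberkeley-dlab/hate-measure-roberta-large")
model.summary()

# Works when the model was exported as a functional Keras model; otherwise
# consult the paper's code for the exact preprocessing and input layout.
print([t.name for t in model.inputs])
```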
ml6team/distilbart-tos-summarizer-tosdr
ml6team
2022-01-20T15:21:41Z
22
15
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "t&c", "tos", "distilbart", "distilbart-6-6", "en", "dataset:tosdr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - en tags: - summarization - t&c - tos - distilbart - distilbart-6-6 datasets: - tosdr metrics: - rouge1 - rouge2 - rougel inference: parameters: min_length: 5 max_length: 512 do_sample: False widget: - text: "In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides." --- # T&C Summarization Model T&C Summarization Model based on [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6), This abstractive summarization model is a part of a bigger end-to-end T&C summarizer pipeline which is preceded by LSA (Latent Semantic Analysis) extractive summarization. The extractive summarization shortens the T&C to be further summarized by this model. ## Finetuning Corpus We collaborated with [TOSDR](https://tosdr.org/) to work with their data, and the model is finetuned accordingly. The article and summarization text is reduced via extractive summarization before it is finetuned to the model. ## Contact Us https://ml6.eu/ . This abstractive model finetuning is the continuation of the Christmas Project 2021 done in ML6: https://bit.ly/XmasProjects . ## Load Finetuned Model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") ``` ## Code Sample This sample requires [sumy](https://pypi.org/project/sumy/), the LSA Extractive Summarization library, as additional package to run. 
``` import re import nltk nltk.download('punkt') from sumy.parsers.plaintext import PlaintextParser from sumy.nlp.tokenizers import Tokenizer from sumy.nlp.stemmers import Stemmer from sumy.summarizers.lsa import LsaSummarizer from transformers import AutoTokenizer, AutoModelForSeq2SeqLM LANGUAGE = "english" EXTRACTED_ARTICLE_SENTENCES_LEN = 12 stemmer = Stemmer(LANGUAGE) lsa_summarizer = LsaSummarizer(stemmer) tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") def get_extractive_summary(text, sentences_count): parser = PlaintextParser.from_string(text, Tokenizer(LANGUAGE)) summarized_info = lsa_summarizer(parser.document, sentences_count) summarized_info = [element._text for element in summarized_info] return ' '.join(summarized_info) def get_summary(dict_summarizer_model, dict_tokenizer, text_content): text_content = get_extractive_summary(text_content, EXTRACTED_ARTICLE_SENTENCES_LEN) tokenizer = dict_tokenizer['tokenizer'] model = dict_summarizer_model['model'] inputs = tokenizer(text_content, max_length=dict_tokenizer['max_length'], truncation=True, return_tensors="pt") outputs = model.generate( inputs["input_ids"], max_length=dict_summarizer_model['max_length'], min_length=dict_summarizer_model['min_length'], ) summarized_text = tokenizer.decode(outputs[0]) match = re.search(r"<s>(.*)</s>", summarized_text) if match is not None: summarized_text = match.group(1) return summarized_text.replace('<s>', '').replace('</s>', '') test_tos = """ In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. 
Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides """ model_dict = { 'model': model, 'max_length': 512, 'min_length': 4 } tokenizer_dict = { 'tokenizer': tokenizer, 'max_length': 1024 } print(get_summary(model_dict, tokenizer_dict, test_tos)) ```
milyiyo/distilbert-base-uncased-finetuned-amazon-review
milyiyo
2022-01-20T15:14:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-base-uncased-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.693 - name: F1 type: f1 value: 0.7002653469272611 - name: Precision type: precision value: 0.709541681233075 - name: Recall type: recall value: 0.693 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-amazon-review This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.3494 - Accuracy: 0.693 - F1: 0.7003 - Precision: 0.7095 - Recall: 0.693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.5 | 500 | 0.8287 | 0.7104 | 0.7120 | 0.7152 | 0.7104 | | 0.4238 | 1.0 | 1000 | 0.8917 | 0.7094 | 0.6989 | 0.6917 | 0.7094 | | 0.4238 | 1.5 | 1500 | 0.9367 | 0.6884 | 0.6983 | 0.7151 | 0.6884 | | 0.3152 | 2.0 | 2000 | 0.9845 | 0.7116 | 0.7144 | 0.7176 | 0.7116 | | 0.3152 | 2.5 | 2500 | 1.0752 | 0.6814 | 0.6968 | 0.7232 | 0.6814 | | 0.2454 | 3.0 | 3000 | 1.1215 | 0.6918 | 0.6954 | 0.7068 | 0.6918 | | 0.2454 | 3.5 | 3500 | 1.2905 | 0.6976 | 0.7048 | 0.7138 | 0.6976 | | 0.1989 | 4.0 | 4000 | 1.2938 | 0.694 | 0.7016 | 0.7113 | 0.694 | | 0.1989 | 4.5 | 4500 | 1.3623 | 0.6972 | 0.7014 | 0.7062 | 0.6972 | | 0.1746 | 5.0 | 5000 | 1.3494 | 0.693 | 0.7003 | 0.7095 | 0.693 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
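No usage example is given above; a minimal sketch such as the following should work for quick inference (the Spanish review text is illustrative, chosen because the card reports results on the `es` split).

```python
from transformers import pipeline

# Minimal sketch: classify a product review with the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="milyiyo/distilbert-base-uncased-finetuned-amazon-review",
)
print(classifier("El producto llegó rápido y funciona perfectamente."))
```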
Mirjam/test-finetuned
Mirjam
2022-01-20T15:14:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: test-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-finetuned This model is a fine-tuned version of [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 1 | nan | 33.8462 | 31.746 | 30.7692 | 30.7692 | 86.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.15.1 - Tokenizers 0.10.3
aviator-neural/mbart_jokes
aviator-neural
2022-01-20T14:31:08Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: mbart_jokes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart_jokes This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0282 ## Model description This model was trained on a jokes dataset, so you can ask a question and the model gives a funny answer. ## Intended uses & limitations ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3455 | 1.0 | 1914 | 3.0282 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
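A minimal usage sketch (not part of the original card); the prompt format of a bare question is an assumption based on the description above.

```python
from transformers import pipeline

# Minimal sketch: ask a question and let the fine-tuned BART model answer.
joker = pipeline("text2text-generation", model="aviator-neural/mbart_jokes")

answer = joker("Why did the chicken cross the road?", max_length=64)
print(answer[0]["generated_text"])
```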
dbsamu/distilbert-base-uncased-finetuned-ner
dbsamu
2022-01-20T10:30:26Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: en metrics: - name: Precision type: precision value: 0.8120642485217545 - name: Recall type: recall value: 0.830235495804385 - name: F1 type: f1 value: 0.8210493441599 - name: Accuracy type: accuracy value: 0.9203828724683252 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2781 - Precision: 0.8121 - Recall: 0.8302 - F1: 0.8210 - Accuracy: 0.9204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3504 | 1.0 | 1250 | 0.2922 | 0.7930 | 0.8075 | 0.8002 | 0.9115 | | 0.2353 | 2.0 | 2500 | 0.2711 | 0.8127 | 0.8264 | 0.8195 | 0.9196 | | 0.1745 | 3.0 | 3750 | 0.2781 | 0.8121 | 0.8302 | 0.8210 | 0.9204 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
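For reference, a minimal inference sketch (not from the original card); the example sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned tagger and group sub-word predictions
# into whole entities.
ner = pipeline(
    "token-classification",
    model="dbsamu/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```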
dehio/german-qg-t5-e2e-quad
dehio
2022-01-20T09:40:47Z
5
3
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit widget: - text: "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge zwei seltene Kurzschnäuzige Seepferdchen entdeckt. Die Tiere seien vergangene Woche bei einer sogenannten Spülsaumkontrolle entdeckt worden, bei der die Strände eigentlich nach Müll und toten Vögeln abgesucht würden, sagte der Geschäftsführer der zuständigen Naturschutz- und Forschungsgemeinschaft Mellumrat, Mathias Heckroth. Dabei seien den Naturschützern am Nordstrand kurz hintereinander die beiden leblosen, nur wenige Zentimeter großen Tiere aufgefallen. Experten der Nationalparkverwaltung bestimmten beide Tiere als Kurzschnäuzige Seepferdchen (Hippocampus hippocampus)." inference: parameters: max_length: 128 language: - de tags: - question generation datasets: - deepset/germanquad model-index: - name: german-qg-t5-e2e-quad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german-qg-t5-e2e-quad (Work in progress) This model is an end-to-end question generation model in German. Given a text, it generates several questions about it. This model is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad). ## Model description More information needed ## Training and evaluation data Bleu_1: 0.196051 Bleu_2: 0.122380 Bleu_3: 0.079980 Bleu_4: 0.053672 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
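A minimal usage sketch based on the widget settings above (plain German text in, generated questions out); how the checkpoint separates multiple generated questions is not documented here, so inspect the raw output.

```python
from transformers import pipeline

# Minimal sketch: generate questions for a German paragraph.
qg = pipeline("text2text-generation", model="dehio/german-qg-t5-e2e-quad")

text = (
    "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge "
    "zwei seltene Kurzschnäuzige Seepferdchen entdeckt."
)
print(qg(text, max_length=128)[0]["generated_text"])
```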
hrdipto/wav2vec2-xls-r-tf-left-right-shuru
hrdipto
2022-01-20T08:48:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-tf-left-right-shuru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0921 - Wer: 1.2628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.5528 | 23.81 | 500 | 0.5509 | 1.9487 | | 0.2926 | 47.62 | 1000 | 0.1306 | 1.2756 | | 0.1171 | 71.43 | 1500 | 0.1189 | 1.2628 | | 0.0681 | 95.24 | 2000 | 0.0921 | 1.2628 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
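No usage example is given; a generic ASR sketch such as the following should apply. The audio file name is a placeholder, and the clip should be 16 kHz mono (decoding a local file also requires ffmpeg).

```python
from transformers import pipeline

# Minimal sketch: transcribe a local 16 kHz audio clip with the fine-tuned model.
asr = pipeline(
    "automatic-speech-recognition",
    model="hrdipto/wav2vec2-xls-r-tf-left-right-shuru",
)
print(asr("command.wav"))  # "command.wav" is a placeholder path
```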
huggingtweets/chickenhalf
huggingtweets
2022-01-20T07:52:22Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/chickenhalf/1642665052826/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1482989404125806596/JtLgKHTu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">chicken sandwich</div> <div style="text-align: center; font-size: 14px;">@chickenhalf</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from chicken sandwich. | Data | chicken sandwich | | --- | --- | | Tweets downloaded | 3202 | | Retweets | 126 | | Short tweets | 427 | | Tweets kept | 2649 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r0cwhle/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chickenhalf's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/chickenhalf') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LiqiangXiao/ConvSearch_QU
LiqiangXiao
2022-01-20T06:32:35Z
7
4
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "arxiv:2109.05460", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
## End-to-end Conversational search model An end-to-end conversational search system for online shopping. It was introduced in [this paper](https://arxiv.org/abs/2109.05460) published at EMNLP. ## Model description ConvSearch is an end-to-end conversational search system that deeply combines the dialog and search systems to improve search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. the profile), where the product text may contain phrases that match user utterances when the schema is incomplete or a product attribute value is missing. Put together, our system has the advantages of reduced error accumulation across individual modules and enhanced robustness against product schema/knowledge gaps. ## Intended uses & limitations You can use the raw model to understand the dialog between the consumer and the server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, etc.) and product attributes. You can also fine-tune this model on similar downstream tasks, such as a shopping dialog system for your own scenario or a customer-service system. Since our model is sequence-to-sequence, any dialog system that can be reformulated as a sequence-to-sequence task can be implemented based on this model. ## How to use You can use this model directly with the following (a runnable sketch is shown below): from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU") model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU") ## Training data ConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns.
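A runnable version of the loading snippet above, extended with an illustrative generation step; the dialog formatting of the input is an assumption, since the expected input format is not documented in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU")

# Illustrative input: one concatenated shopping dialog turn (format is an assumption).
dialog = "user: I am looking for a waterproof running jacket under 100 dollars."
inputs = tokenizer(dialog, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```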
abdelkader/distilbert-base-uncased-distilled-clinc
abdelkader
2022-01-20T05:15:31Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9464516129032258 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3038 - Accuracy: 0.9465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 2.8460 | 0.7506 | | 3.322 | 2.0 | 636 | 1.4301 | 0.8532 | | 3.322 | 3.0 | 954 | 0.7377 | 0.9152 | | 1.2296 | 4.0 | 1272 | 0.4784 | 0.9316 | | 0.449 | 5.0 | 1590 | 0.3730 | 0.9390 | | 0.449 | 6.0 | 1908 | 0.3367 | 0.9429 | | 0.2424 | 7.0 | 2226 | 0.3163 | 0.9468 | | 0.1741 | 8.0 | 2544 | 0.3074 | 0.9452 | | 0.1741 | 9.0 | 2862 | 0.3054 | 0.9458 | | 0.1501 | 10.0 | 3180 | 0.3038 | 0.9465 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
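For reference, a minimal inference sketch (not from the original card); the utterance is illustrative, and the predicted label will be one of the CLINC150 intents.

```python
from transformers import pipeline

# Minimal sketch: map a user utterance to a CLINC150 intent label.
intent_classifier = pipeline(
    "text-classification",
    model="abdelkader/distilbert-base-uncased-distilled-clinc",
)
print(intent_classifier("How do I reset my online banking password?"))
```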
mrp/marian-finetuned-kde4-en-to-fr
mrp
2022-01-20T04:05:30Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-fr metrics: - name: Bleu type: bleu value: 50.20410659441166 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9643 - Bleu: 50.2041 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
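No usage example is included above; a minimal sketch like the following should work (the input is an illustrative KDE-style UI string).

```python
from transformers import pipeline

# Minimal sketch: translate an English UI string into French.
translator = pipeline("translation", model="mrp/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```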
ethzanalytics/ai-msgbot-gpt2-XL
ethzanalytics
2022-01-20T01:40:42Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "gpt", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - text-generation - gpt2 - gpt license: mit datasets: - natural questions widget: - text: "Do you like my new haircut?\nperson beta:\n\n" example_title: "haircut" - text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n" example_title: "teaching" - text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n" example_title: "favorite" - text: "how much does it cost?\nperson beta:\n\n" example_title: "money" inference: parameters: min_length: 2 max_length: 64 length_penalty: 0.6 no_repeat_ngram_size: 3 do_sample: True top_p: 0.85 top_k: 10 repetition_penalty: 2.1 --- # ai-msgbot GPT2-XL _NOTE: model card is WIP_ GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`. Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it). ## conversation data The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses. `script_speaker_name` = `person alpha` `script_responder_name` = `person beta` ## examples - the default inference API examples should work _okay_ - an ideal test would be explicitly adding `person beta` into the prompt text the model is forced to respond to instead of adding onto the entered prompt. ### Example prompt: ``` do you like to eat beans? person beta: ``` ### Resulting output ``` do you like to eat beans?person beta: yes, i like fried beans. person alpha: i wonder when the first beans were cultivated and how they were processed. person beta: nitrogenic bacteria (in ``` _Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_ ## citations ``` @inproceedings{dinan2019wizard, author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston}, title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents}, booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)}, year={2019}, } @inproceedings{li-etal-2017-dailydialog, title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset", author = "Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi", booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = nov, year = "2017", address = "Taipei, Taiwan", publisher = "Asian Federation of Natural Language Processing", url = "https://aclanthology.org/I17-1099", pages = "986--995", abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}", } ```
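For completeness, a sketch that mirrors the prompt format and the sampling settings listed in the card's inference parameters. GPT-2 XL weights are several gigabytes, so this needs a correspondingly large machine; the example question is illustrative.

```python
from transformers import pipeline

chat = pipeline("text-generation", model="ethzanalytics/ai-msgbot-gpt2-XL")

# Prompt format from the card: end with "person beta:" so the bot replies.
prompt = "do you like to eat beans?\nperson beta:\n\n"
result = chat(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
)
# Strip the prompt so only the bot's reply is printed.
print(result[0]["generated_text"][len(prompt):])
```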
D3xter1922/electra-base-discriminator-finetuned-cola
D3xter1922
2022-01-20T01:03:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: electra-base-discriminator-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.6824089073723449 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-discriminator-finetuned-cola This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6367 - Matthews Correlation: 0.6824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4139 | 1.0 | 535 | 0.4137 | 0.6381 | | 0.2452 | 2.0 | 1070 | 0.4887 | 0.6504 | | 0.17 | 3.0 | 1605 | 0.5335 | 0.6757 | | 0.1135 | 4.0 | 2140 | 0.6367 | 0.6824 | | 0.0817 | 5.0 | 2675 | 0.6742 | 0.6755 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
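A minimal inference sketch (not part of the original card): score a sentence for linguistic acceptability. The label order is taken from the checkpoint's config; in GLUE CoLA, index 1 conventionally means "acceptable".

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "D3xter1922/electra-base-discriminator-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a (deliberately ungrammatical) example sentence.
inputs = tokenizer("The book were written by three co-authors.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities in the checkpoint's label order
```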
nimrah/wav2vec2-large-xls-r-300m-hindi-colab
nimrah
2022-01-19T21:21:34Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt
vuiseng9
2022-01-19T19:13:40Z
5
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-pruneofa-90pc-bt```](https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes: 1. magnitude sparsification at 0% upon initialization. Custom reverse masking and sparsity freezing are applied. 2. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers. 3. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad``` ``` eval_exact_match = 80.6623 eval_f1 = 87.7147 eval_samples = 10784 ``` # Setup ```bash # OpenVINO/NNCF git clone https://github.com/vuiseng9/nncf && cd nncf git checkout tld-poc git reset --hard 5647610d5ee2bf9f1324604e6579bca1c391e260 python setup.py develop pip install -r examples/torch/requirements.txt # Huggingface nn_pruning git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning git checkout reproduce-evaluation git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446 pip install -e ".[dev]" # Huggingface Transformers git clone https://github.com/vuiseng9/transformers && cd transformers git checkout tld-poc git reset --hard 5dd7402e9a316041dea4ff67508c01047323616e pip install -e . head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {} # Additional dependencies pip install onnx ``` # Train ```bash wget https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt/raw/main/nncf_bert_squad_sparsity.json NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise OUTROOT=/path/to/train_output_root #to-revise WORKDIR=transformers/examples/pytorch/question-answering #to-revise RUNID=bert-base-squadv1-pruneofa-90pc-bt-qat-lt cd $WORKDIR OUTDIR=$OUTROOT/$RUNID mkdir -p $OUTDIR export CUDA_VISIBLE_DEVICES=0 NEPOCH=5 python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \ --pruneofa_qat \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 250 \ --learning_rate 3e-5 \ --lr_scheduler_type cosine_with_restarts \ --warmup_ratio 0.25 \ --cosine_cycles 1 \ --teacher bert-large-uncased-whole-word-masking-finetuned-squad \ --teacher_ratio 0.9 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size 128 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 250 \ --nncf_config $NNCF_CFG \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR ``` # Eval This repo must be cloned locally. ```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt MODELROOT=/path/to/cloned_repo_above #to-revise export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt-qat-lt WORKDIR=transformers/examples/pytorch/question-answering #to-revise cd $WORKDIR mkdir $OUTDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \ --dataset_name squad \ --qat_checkpoint $MODELROOT/checkpoint-22000 \ --nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \ --to_onnx $OUTDIR/bert-base-squadv1-pruneofa-90pc-bt-qat-lt.onnx \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ```
masapasa/wav2vec2-large-xls-r-300m-turkish-colab
masapasa
2022-01-19T17:30:55Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
facebook/contriever
facebook
2022-01-19T17:23:28Z
303,332
60
transformers
[ "transformers", "pytorch", "bert", "arxiv:2112.09118", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model has been trained without supervision following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available here https://github.com/facebookresearch/contriever. ## Usage (HuggingFace Transformers) Using the model directly available in HuggingFace transformers requires to add a mean pooling operation to obtain a sentence embedding. ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('facebook/contriever') model = AutoModel.from_pretrained('facebook/contriever') sentences = [ "Where was Marie Curie born?", "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.", "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace." ] # Apply tokenizer inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings outputs = model(**inputs) # Mean pooling def mean_pooling(token_embeddings, mask): token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.) sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None] return sentence_embeddings embeddings = mean_pooling(outputs[0], inputs['attention_mask']) ```
huggingtweets/t_zahil
huggingtweets
2022-01-19T16:50:12Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374040164180299791/ACw4G3nZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Thomas Sanlis 🌱</div> <div style="text-align: center; font-size: 14px;">@t_zahil</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Thomas Sanlis 🌱. | Data | Thomas Sanlis 🌱 | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 597 | | Short tweets | 312 | | Tweets kept | 2333 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33umauvo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @t_zahil's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fhm3dlx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fhm3dlx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/t_zahil') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
baaastien/xls-r-ab-test
baaastien
2022-01-19T12:03:47Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset. It achieves the following results on the evaluation set: - Loss: 133.5167 - Wer: 18.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
chitra/finetuned-adversarial-paraphrase-model
chitra
2022-01-19T09:13:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: finetuned-adversarial-paraphrase-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-adversarial-paraphrase-model This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.5680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0848 | 1.0 | 2000 | 5.4633 | | 0.0495 | 2.0 | 4000 | 6.0352 | | 0.0121 | 3.0 | 6000 | 7.5680 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/wmascen
huggingtweets
2022-01-19T04:52:23Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/wmascen/1642567908765/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1453179488569802752/LsB82o0-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wihrel</div> <div style="text-align: center; font-size: 14px;">@wmascen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wihrel. | Data | wihrel | | --- | --- | | Tweets downloaded | 2900 | | Retweets | 203 | | Short tweets | 236 | | Tweets kept | 2461 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bsbw98xm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wmascen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/wmascen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/godslovepariah
huggingtweets
2022-01-19T04:12:22Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/godslovepariah/1642565537762/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1432780406777020417/XTrp9MCR_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LOVER//PARIAH</div> <div style="text-align: center; font-size: 14px;">@godslovepariah</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LOVER//PARIAH. | Data | LOVER//PARIAH | | --- | --- | | Tweets downloaded | 525 | | Retweets | 9 | | Short tweets | 10 | | Tweets kept | 506 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6l5fj9xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @godslovepariah's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/godslovepariah') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
NbAiLab/roberta_des_128
NbAiLab
2022-01-19T01:06:51Z
3
0
transformers
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
Just for performing some experiments. Do not use. This run needed to be restarted at 100k steps: it was hitting memory errors at the end of the epoch, and the cause is not yet clear. Step 2 is therefore on train_2__4. The learning rate is kept static for a while. The first 100k steps ended at 0.59, which is decent this early, so there is no point in running more epochs here. The corpus is being changed before training continues.
mrm8488/bert-tiny-5-finetuned-squadv2
mrm8488
2022-01-18T20:19:49Z
154
4
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "QA", "en", "arxiv:1908.08962", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en thumbnail: tags: - QA --- # BERT-Tiny ([5](https://huggingface.co/google/bert_uncased_L-12_H-128_A-2)) fine-tuned on SQuAD v2 [BERT-Tiny](https://huggingface.co/google/bert_uncased_L-12_H-128_A-2) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task. **Model size** (after training): **24.33 MB** ## Details of BERT-Tiny and its 'family' (from their documentation) Released on March 11th, 2020. This model is part of the 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **57.12** | | **F1** | **60.86** | | Model | EM | F1 score | SIZE (MB) | | ----------------------------------------------------------------------------------------- | --------- | --------- | --------- | | [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** | | [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | **57.12** | **60.86** | 24.34 | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/bert-tiny-5-finetuned-squadv2", tokenizer="mrm8488/bert-tiny-5-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
malloc/OpenNMT-py-English-German-Transformer
malloc
2022-01-18T20:18:11Z
0
2
null
[ "translation", "pytorch", "de", "en", "dataset:WMT", "license:mit", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - de - en tags: - translation - pytorch license: mit datasets: - WMT metrics: - bleu --- # OpenNMT-py-English-German-Transformer [OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework. OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for English to German translation. - Configuration: Base Transformer configuration with [standard training options](http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model-do-you-support-multi-gpu) - Data: WMT with shared SentencePiece model - BLEU: - newstest2014 = 26.89 - newstest2017 = 28.09
tal-yifat/injury-report-test
tal-yifat
2022-01-18T16:24:00Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: injury-report-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # injury-report-test This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.8158 | 1.0 | 6633 | 1.7368 | | 1.6984 | 2.0 | 13266 | 1.6198 | | 1.6209 | 3.0 | 19899 | 1.5800 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
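The auto-generated card above gives training details but no usage snippet. A minimal fill-mask sketch follows; the example sentence and its `[MASK]` placement are illustrative assumptions, not content from the original card.

```python
from transformers import pipeline

# Query the fine-tuned checkpoint through the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="tal-yifat/injury-report-test")

# BERT-style models use the [MASK] token; this sentence is only an illustration.
for prediction in fill_mask("The worker injured his [MASK] while operating the machine."):
    print(prediction["token_str"], round(prediction["score"], 3))
```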
phueb/BabyBERTa-2
phueb
2022-01-18T14:44:44Z
60
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "BabyBERTa", "en", "dataset:CHILDES", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - BabyBERTa datasets: - CHILDES widget: - text: "Look here. What is that <mask> ?" - text: "Do you like your <mask> ?" --- ## BabyBERTa ### Overview BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input. It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed. The three provided models are randomly selected from 10 that were trained and reported in the paper. ## Loading the tokenizer BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults. For instance, to load the tokenizer for BabyBERTa-1, load it as follows: ```python tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1", add_prefix_space=True) ``` ### Hyper-Parameters See the paper for details. All provided models were trained for 400K steps with a batch size of 16. Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero. ### Performance BabyBERTa was developed for learning grammatical knowledge from child-directed input. Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite. The best model achieves an overall accuracy of 80.3, comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021). Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/). There are two reasons for this: 1. Performance of RoBERTa-base is slightly higher because the authors previously lower-cased all words in Zorro before evaluation. Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased. In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change. 2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective. This resulted in a small reduction in the performance of BabyBERTa. Overall Accuracy on Zorro: | Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) | |----------------------------------------|------------------------------|------------| | [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 | | [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 | | [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 | ### Additional Information This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org). More info can be found [here](https://github.com/phueb/BabyBERTa). [link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1 [link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2 [link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
phueb/BabyBERTa-1
phueb
2022-01-18T14:44:02Z
56
2
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "BabyBERTa", "en", "dataset:CHILDES", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - BabyBERTa datasets: - CHILDES widget: - text: "Look here. What is that <mask> ?" - text: "Do you like your <mask> ?" --- ## BabyBERTa ### Overview BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input. It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed. The three provided models are randomly selected from 10 that were trained and reported in the paper. ## Loading the tokenizer BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults. For instance, to load the tokenizer for BabyBERTa-1, load it as follows: ```python tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1", add_prefix_space=True) ``` ### Hyper-Parameters See the paper for details. All provided models were trained for 400K steps with a batch size of 16. Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero. ### Performance BabyBERTa was developed for learning grammatical knowledge from child-directed input. Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite. The best model achieves an overall accuracy of 80.3, comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021). Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/). There are two reasons for this: 1. Performance of RoBERTa-base is slightly higher because the authors previously lower-cased all words in Zorro before evaluation. Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased. In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change. 2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective. This resulted in a small reduction in the performance of BabyBERTa. Overall Accuracy on Zorro: | Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) | |----------------------------------------|------------------------------|------------| | [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 | | [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 | | [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 | ### Additional Information This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org). More info can be found [here](https://github.com/phueb/BabyBERTa). [link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1 [link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2 [link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
NbAiLab/roberta_des_ada_128_6e4
NbAiLab
2022-01-18T10:45:01Z
8
0
transformers
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
Just for performing some experiments. Do not use.
huggingtweets/dankogai-hirox246
huggingtweets
2022-01-18T09:55:05Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dankogai-hirox246/1642499700234/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1190142566831984640/o4kO2hp-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & Dan Kogai</div> <div style="text-align: center; font-size: 14px;">@dankogai-hirox246</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & Dan Kogai. | Data | ひろゆき, Hiroyuki Nishimura | Dan Kogai | | --- | --- | --- | | Tweets downloaded | 3249 | 3250 | | Retweets | 284 | 340 | | Short tweets | 1988 | 2416 | | Tweets kept | 977 | 494 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vrtv6xf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dankogai-hirox246's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yfxplpr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yfxplpr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dankogai-hirox246') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
philschmid/tf-distilbart-cnn-12-6-tradetheevent
philschmid
2022-01-18T05:02:13Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "bart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: philschmid/tf-distilbart-cnn-12-6-tradetheevent results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # philschmid/tf-distilbart-cnn-12-6-tradetheevent This model is a fine-tuned version of [philschmid/tf-distilbart-cnn-12-6](https://huggingface.co/philschmid/tf-distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6894 - Validation Loss: 1.7245 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 161440, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.6635 | 1.5957 | 0 | | 1.3144 | 1.5577 | 1 | | 1.0819 | 1.6059 | 2 | | 0.8702 | 1.6695 | 3 | | 0.6894 | 1.7245 | 4 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
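Since the card above records only training metrics, a hedged summarization sketch follows. The checkpoint ships TensorFlow weights, so the framework is pinned explicitly; the placeholder article text and generation lengths are assumptions, not values from the original card.

```python
from transformers import pipeline

# Summarization pipeline over the TensorFlow checkpoint (requires TensorFlow installed).
summarizer = pipeline(
    "summarization",
    model="philschmid/tf-distilbart-cnn-12-6-tradetheevent",
    framework="tf",
)

article = "Replace this with a financial news article to summarize."  # placeholder input
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```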
huggingtweets/dankogai-hirox246-syakkin_dama
huggingtweets
2022-01-18T02:01:17Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dankogai-hirox246-syakkin_dama/1642471272927/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1190142566831984640/o4kO2hp-_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1283621672541536259/WI_8OTJz_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & Dan Kogai & 借金玉</div> <div style="text-align: center; font-size: 14px;">@dankogai-hirox246-syakkin_dama</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & Dan Kogai & 借金玉. | Data | ひろゆき, Hiroyuki Nishimura | Dan Kogai | 借金玉 | | --- | --- | --- | --- | | Tweets downloaded | 3249 | 3250 | 3249 | | Retweets | 283 | 341 | 260 | | Short tweets | 1819 | 2313 | 2918 | | Tweets kept | 1147 | 596 | 71 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1meoqt2b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dankogai-hirox246-syakkin_dama's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gc1ic0l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gc1ic0l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dankogai-hirox246-syakkin_dama') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jkang/drawing-artistic-trend-classifier
jkang
2022-01-18T01:19:29Z
3
0
tf-keras
[ "tf-keras", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: mit datasets: - web crawled (coming soon) --- # Simple CNN-based Artistic Trend Classifier This repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends. See also: `https://huggingface.co/jkang/drawing-artist-classifier` - The purpose of this model was quick prototyping - Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler` - 8 popular artistic trends were chosen (\[TREND\]: \[ID\]): - cubism: 0 - expressionism: 1 - fauvisme: 2 - graffitiar: 3 - impressionism: 4 - popart: 5 - post_impressionism: 6 - surrealism: 7 - About 100 representative paintings per artist, covering the 8 trends, were crawled and manually checked - Dataset will be shared later # How to use ```python import tensorflow as tf from huggingface_hub import from_pretrained_keras model = from_pretrained_keras("jkang/drawing-artistic-trend-classifier") image_file = 'monet.jpg' img = tf.io.read_file(image_file) img = tf.io.decode_jpeg(img, channels=3) last_layer_activation, predictions = model(img[tf.newaxis,...]) ``` # Intended uses & limitations You can use this model freely for predicting the artistic trend of a given image. Please keep in mind that this model is not intended for production, but for research and quick prototyping. Web-crawled image data might not have a balanced amount of drawings that sufficiently represent each trend. --- - 2022-01-18 first created by jaekoo kang
huggingtweets/ayatokura-chomado-ikeay
huggingtweets
2022-01-17T23:42:42Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/ayatokura-chomado-ikeay/1642462957980/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1334136134234849280/XgE0O39a_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480842681182220288/ywam5sXK_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480168235417083905/Kp8uyXIy_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">池澤あやか / いけあや & ちょまど🎀💻エンジニア兼漫画家 & 職業「戸倉彩」👩‍💻とくあや</div> <div style="text-align: center; font-size: 14px;">@ayatokura-chomado-ikeay</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 池澤あやか / いけあや & ちょまど🎀💻エンジニア兼漫画家 & 職業「戸倉彩」👩‍💻とくあや. | Data | 池澤あやか / いけあや | ちょまど🎀💻エンジニア兼漫画家 | 職業「戸倉彩」👩‍💻とくあや | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3245 | 3249 | | Retweets | 224 | 717 | 1266 | | Short tweets | 2813 | 867 | 1036 | | Tweets kept | 213 | 1661 | 947 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rhguk5h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayatokura-chomado-ikeay's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34bxjwb8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34bxjwb8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ayatokura-chomado-ikeay') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Huertas97/en_roberta_base_leetspeak_ner
Huertas97
2022-01-17T21:54:01Z
5
1
spacy
[ "spacy", "token-classification", "en", "license:apache-2.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - spacy - token-classification language: - en license: apache-2.0 widget: - text: "But one other thing that we have to re;think is the way that we dy£ our #c!l.o|th?£+s." example_title: "Word camouflage detection" model-index: - name: en_roberta_base_leetspeak_ner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7966001851 - name: NER Recall type: recall value: 0.8619559279 - name: NER F Score type: f_score value: 0.8279903783 --- | Feature | Description | | --- | --- | | **Name** | `en_roberta_base_leetspeak_ner` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [roberta-base](https://huggingface.co/roberta-base) pre-trained model on English language using a masked language modeling (MLM) objective by Yinhan Liu et al. <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER) app where this model is in production for countering information disorders| | **License** | Apache 2.0 | | **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 82.80 | | `ENTS_P` | 79.66 | | `ENTS_R` | 86.20 | | `TRANSFORMER_LOSS` | 177808.42 | | `NER_LOSS` | 608427.31 |
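The card above documents labels and accuracy but no loading code. A minimal sketch follows, assuming the packaged spaCy pipeline has been installed locally (for example from the wheel distributed with the model repository); the example text is taken from the card's widget.

```python
import spacy

# Assumes the spaCy package for this pipeline is already installed locally,
# so that spacy.load can resolve it by name.
nlp = spacy.load("en_roberta_base_leetspeak_ner")

# Example text from the widget in the card above.
doc = nlp("But one other thing that we have to re;think is the way that we dy£ our #c!l.o|th?£+s.")

# Print detected camouflaged spans with their labels
# (INV_CAMO, LEETSPEAK, MIX, PUNCT_CAMO).
for ent in doc.ents:
    print(ent.text, ent.label_)
```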
Rocketknight1/marian-finetuned-kde4-en-to-fr
Rocketknight1
2022-01-17T20:42:34Z
5
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6862 - Validation Loss: 0.8050 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0615 | 0.8832 | 0 | | 0.7983 | 0.8211 | 1 | | 0.6862 | 0.8050 | 2 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
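The auto-generated card above omits a usage example. A hedged translation sketch is shown below; the framework is pinned to TensorFlow because only TF weights are listed, and the English input sentence is an illustrative assumption.

```python
from transformers import pipeline

# English-to-French translation with the TensorFlow checkpoint
# (requires TensorFlow to be installed).
translator = pipeline(
    "translation",
    model="Rocketknight1/marian-finetuned-kde4-en-to-fr",
    framework="tf",
)

print(translator("Default to expanded threads")[0]["translation_text"])
```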
ronanki/xlmr_17-01-2022_v3
ronanki
2022-01-17T20:34:20Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ronanki/xlmr_17-01-2022_v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ronanki/xlmr_17-01-2022_v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_17-01-2022_v3') model = AutoModel.from_pretrained('ronanki/xlmr_17-01-2022_v3') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_17-01-2022_v3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
dshvadskiy/bert-finetuned-ner
dshvadskiy
2022-01-17T17:54:13Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2002", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2002 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2002 type: conll2002 args: es metrics: - name: Precision type: precision value: 0.7394396551724138 - name: Recall type: recall value: 0.7883731617647058 - name: F1 type: f1 value: 0.7631227758007118 - name: Accuracy type: accuracy value: 0.9655744705631151 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset. It achieves the following results on the evaluation set: - Loss: 0.1458 - Precision: 0.7394 - Recall: 0.7884 - F1: 0.7631 - Accuracy: 0.9656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1047 | 1.0 | 1041 | 0.1516 | 0.7173 | 0.7505 | 0.7335 | 0.9602 | | 0.068 | 2.0 | 2082 | 0.1280 | 0.7470 | 0.7888 | 0.7673 | 0.9664 | | 0.0406 | 3.0 | 3123 | 0.1458 | 0.7394 | 0.7884 | 0.7631 | 0.9656 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
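The card above reports CoNLL-2002 (Spanish) metrics but no inference snippet. A minimal sketch follows; the Spanish example sentence and the aggregation strategy are illustrative choices, not part of the original card.

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="dshvadskiy/bert-finetuned-ner",
    aggregation_strategy="simple",
)

# The model was fine-tuned on CoNLL-2002 Spanish, so a Spanish sentence is used here.
for entity in ner("Manuel Romero trabaja en Telefónica en Madrid."):
    print(entity["entity_group"], entity["word"], float(entity["score"]))
```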
jiobiala24/wav2vec2-base-checkpoint-6
jiobiala24
2022-01-17T14:22:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-6 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-5](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-5) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9738 - Wer: 0.3323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3435 | 1.82 | 1000 | 0.5637 | 0.3419 | | 0.2599 | 3.65 | 2000 | 0.5804 | 0.3473 | | 0.2043 | 5.47 | 3000 | 0.6481 | 0.3474 | | 0.1651 | 7.3 | 4000 | 0.6937 | 0.3452 | | 0.1376 | 9.12 | 5000 | 0.7221 | 0.3429 | | 0.118 | 10.95 | 6000 | 0.7634 | 0.3441 | | 0.105 | 12.77 | 7000 | 0.7789 | 0.3444 | | 0.0925 | 14.6 | 8000 | 0.8209 | 0.3444 | | 0.0863 | 16.42 | 9000 | 0.8293 | 0.3440 | | 0.0756 | 18.25 | 10000 | 0.8553 | 0.3412 | | 0.0718 | 20.07 | 11000 | 0.9006 | 0.3430 | | 0.0654 | 21.9 | 12000 | 0.9541 | 0.3458 | | 0.0605 | 23.72 | 13000 | 0.9400 | 0.3350 | | 0.0552 | 25.55 | 14000 | 0.9547 | 0.3363 | | 0.0543 | 27.37 | 15000 | 0.9715 | 0.3348 | | 0.0493 | 29.2 | 16000 | 0.9738 | 0.3323 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
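As the card above contains no inference example, a minimal ASR sketch is added below; `sample.wav` is a placeholder path to a local speech recording, not a file shipped with the model.

```python
from transformers import pipeline

# Automatic speech recognition with the fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="jiobiala24/wav2vec2-base-checkpoint-6")

# "sample.wav" is a placeholder for a local speech recording (ideally 16 kHz mono).
print(asr("sample.wav")["text"])
```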
DSI/human-directed-sentiment
DSI
2022-01-17T14:20:52Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
## Human-Directed Sentiment Analysis in Arabic A supervised training procedure to classify human-directed sentiment in a text. We define the human-directed sentiment as the polarity of one user towards a second person who is involved with them in a discussion.
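The card above describes the task but gives no loading code. A minimal sketch follows, using the generic text-classification pipeline; the Arabic example sentence is an illustrative placeholder and the label names depend on the model's configuration.

```python
from transformers import pipeline

# Generic text-classification pipeline; the label names come from the model's config
# and are not documented in the card above.
classifier = pipeline("text-classification", model="DSI/human-directed-sentiment")

# Illustrative placeholder: an utterance addressed at another participant in a discussion.
print(classifier("أنت دائمًا تقدم حججًا مقنعة في النقاش"))
```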
nielsr/tapex-large-finetuned-tabfact
nielsr
2022-01-17T13:39:28Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text-classification", "tapex", "en", "dataset:tab_fact", "arxiv:2107.07653", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - tapex license: apache-2.0 datasets: - tab_fact inference: false --- TAPEX-large model fine-tuned on TabFact. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining). To load it and run inference, you can do the following: ```python from transformers import BartTokenizer, BartForSequenceClassification import pandas as pd tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-tabfact") model = BartForSequenceClassification.from_pretrained("nielsr/tapex-large-finetuned-tabfact") # create table data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) # turn into dict table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]} # turn into the format TAPEX expects # define the linearizer by copying the IndexedRowTableLinearize class (not part of transformers) from: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py linearizer = IndexedRowTableLinearize() linear_table = linearizer.process_table(table_dict) # add sentence sentence = "George Clooney has 69 movies" joint_input = sentence + " " + linear_table # encode encoding = tokenizer(joint_input, return_tensors="pt") # forward pass outputs = model(**encoding) # print prediction logits = outputs.logits print(logits.argmax(-1)) ```
Dumiiii/wav2vec2-xls-r-300m-romanian
Dumiiii
2022-01-17T13:34:59Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-300m-romanian --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## This model achieves a WER of 12.457178% on the common-voice ro test split # wav2vec2-xls-r-300m-romanian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice ro and RSS datasets. It achieves the following results on the evaluation set: - eval_loss: 0.0836 - eval_wer: 0.0705 - eval_runtime: 160.4549 - eval_samples_per_second: 11.081 - eval_steps_per_second: 1.39 - epoch: 14.38 - step: 2703 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 15 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3 The following code was used for evaluation: ```python import torch import torchaudio import string import re from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ro", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian") model = Wav2Vec2ForCTC.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian") model.to("cuda") chars_to_ignore_regex = '['+string.punctuation+']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Run inference batch by batch and decode the predictions def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` Credits for evaluation: https://huggingface.co/anton-l
addy88/t5-grammar-correction
addy88
2022-01-17T12:09:14Z
109
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
### How to use Here is how to use this model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("addy88/t5-grammar-correction") model = AutoModelForSeq2SeqLM.from_pretrained("addy88/t5-grammar-correction") input_ids = tokenizer('grammar: This sentences has has bads grammar.', return_tensors='pt').input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
nickmuchi/minilm-finetuned-emotion_nm
nickmuchi
2022-01-17T08:15:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: minilm-finetuned-emotion_nm results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: F1 type: f1 value: 0.9322805793931607 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # minilm-finetuned-emotion_nm This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1918 - F1: 0.9323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3627 | 1.0 | 250 | 1.0048 | 0.5936 | | 0.8406 | 2.0 | 500 | 0.6477 | 0.8608 | | 0.5344 | 3.0 | 750 | 0.4025 | 0.9099 | | 0.3619 | 4.0 | 1000 | 0.3142 | 0.9188 | | 0.274 | 5.0 | 1250 | 0.2489 | 0.9277 | | 0.2225 | 6.0 | 1500 | 0.2320 | 0.9303 | | 0.191 | 7.0 | 1750 | 0.2083 | 0.9298 | | 0.1731 | 8.0 | 2000 | 0.1969 | 0.9334 | | 0.1606 | 9.0 | 2250 | 0.1928 | 0.9362 | | 0.1462 | 10.0 | 2500 | 0.1918 | 0.9323 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
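The trainer-generated card above lists metrics only, so a short classification sketch may help; the example sentence is an illustrative assumption and the predicted labels are the emotion classes of the `emotion` dataset.

```python
from transformers import pipeline

# Emotion classification with the fine-tuned MiniLM checkpoint.
classifier = pipeline("text-classification", model="nickmuchi/minilm-finetuned-emotion_nm")

print(classifier("I am absolutely thrilled with these results!"))
```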
YasinShihab/asr-en-bn-test
YasinShihab
2022-01-17T06:37:54Z
0
1
null
[ "bn", "audio", "automatic-speech-recognition", "speech", "dataset:OpenSLR", "license:cc-by-sa-4.0", "model-index", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: Bengali datasets: - OpenSLR metrics: - wer tags: - bn - audio - automatic-speech-recognition - speech license: cc-by-sa-4.0 model-index: - name: XLSR Wav2Vec2 Bengali by Arijit results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: OpenSLR type: OpenSLR args: ben metrics: - name: Test WER type: wer value: 32.45 --- # Wav2Vec2-Large-XLSR-Bengali Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training. When using this model, make sure that your speech input is sampled at 16kHz. The training script can be found at: train.py Data prep notebook: https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing Inference notebook: https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali") model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali") # model = model.to("cuda") TEST_AUDIO_SR = 48_000 # assumed sampling rate of the input file; adjust to match your audio resampler = torchaudio.transforms.Resample(TEST_AUDIO_SR, 16_000) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch) speech = resampler(speech_array).squeeze().numpy() return speech speech_array = speech_file_to_array_fn("test_file.wav") inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values).logits predicted_ids = torch.argmax(logits, dim=-1) preds = processor.batch_decode(predicted_ids)[0] print(preds.replace("[PAD]","")) ``` **Test Result**: WER on ~4,200 utterances: 32.45 %
sahri/indonesiasentiment
sahri
2022-01-17T04:50:03Z
19
0
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "indonesian-roberta-base-sentiment-classifier", "id", "dataset:indonlu", "arxiv:1907.11692", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: id tags: - indonesian-roberta-base-sentiment-classifier license: mit datasets: - indonlu widget: - text: "tidak jelek tapi keren" --- ## Indonesian RoBERTa Base Sentiment Classifier Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which was then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews. After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ---------------------------------------------- | ------- | ------------ | ------------------------------- | | `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` | ## Evaluation Results The model was trained for 5 epochs and the best model was loaded at the end. | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | | ----- | ------------- | --------------- | -------- | -------- | --------- | -------- | | 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 | | 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 | | 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 | | 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 | | 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 | ## How to Use ### As Text Classifier ```python from transformers import pipeline pretrained_name = "sahri/indonesiasentiment" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("tidak jelek tapi keren") ``` ## Disclaimer Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model. ## Author Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by sahri ramadhan. All computation and development were done on Google Colaboratory using their free GPU access.