| Column | Type | Summary |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-30 00:39:23 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 526 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-30 00:39:08 |
| card | string | lengths 11 to 1.01M |
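The table above describes the columns of the records that follow. As a minimal sketch of how such a table could be queried with the 🤗 Datasets library, assuming the rows are published as a dataset (the ID "models-metadata" below is a hypothetical placeholder, since the actual dataset name is not given here):

```python
from datasets import load_dataset

# "models-metadata" is a hypothetical placeholder for the dataset behind this table
ds = load_dataset("models-metadata", split="train")

# keep only speech-recognition checkpoints and sort by popularity
asr = ds.filter(lambda row: row["pipeline_tag"] == "automatic-speech-recognition")
asr = asr.sort("downloads", reverse=True)
for row in asr.select(range(3)):
    print(row["modelId"], row["downloads"], row["likes"])
```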
Khalsuu/filipino-wav2vec2-l-xls-r-300m-test
Khalsuu
2022-04-23T08:27:45Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:filipino_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-22T04:36:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - filipino_voice model-index: - name: filipino-wav2vec2-l-xls-r-300m-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # filipino-wav2vec2-l-xls-r-300m-test This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the filipino_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7753 - Wer: 0.4831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7314 | 2.09 | 400 | 0.7541 | 0.7262 | | 0.6065 | 4.19 | 800 | 0.6738 | 0.6314 | | 0.4063 | 6.28 | 1200 | 0.6310 | 0.5992 | | 0.2986 | 8.38 | 1600 | 0.6301 | 0.5340 | | 0.2263 | 10.47 | 2000 | 0.6598 | 0.5391 | | 0.1714 | 12.57 | 2400 | 0.7778 | 0.5593 | | 0.1303 | 14.66 | 2800 | 0.7231 | 0.4907 | | 0.1056 | 16.75 | 3200 | 0.8031 | 0.4885 | | 0.0851 | 18.85 | 3600 | 0.7753 | 0.4831 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
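The card above lists no usage example; a minimal sketch for transcribing audio with this checkpoint via the 🤗 Transformers pipeline API could look like the following (the audio path is a placeholder for any 16 kHz mono Filipino speech clip):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Khalsuu/filipino-wav2vec2-l-xls-r-300m-test",
)

# "sample_filipino.wav" is a placeholder path to a 16 kHz speech clip
print(asr("sample_filipino.wav")["text"])
```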
zhufy/squad-en-bert-base
zhufy
2022-04-23T05:09:27Z
10
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "bert-base", "endpoints_compatible", "region:us" ]
question-answering
2022-03-10T10:12:56Z
--- language: English task: extractive question answering datasets: SQuAD 2.0 tags: - bert-base --- # Model Description This model is for English extractive question answering. It is based on the [bert-base-cased](https://huggingface.co/bert-base-cased) model, and it is case-sensitive: it makes a difference between english and English. # Training data [English SQuAD v2.0](https://rajpurkar.github.io/SQuAD-explorer/) # How to use You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline: ``` python >>> from transformers.pipelines import pipeline >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base") >>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base") >>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer) >>> context = "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do." >>> question = "What are two basic primary resources used to guage complexity?" >>> inputs = {"question": question, "context":context } >>> nlp(inputs) {'score': 0.8589141368865967, 'start': 305, 'end': 321, 'answer': 'time and storage'} ```
zhufy/squad-ms-bert-base
zhufy
2022-04-23T05:09:03Z
5
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "bert-base", "endpoints_compatible", "region:us" ]
question-answering
2022-03-11T04:46:48Z
--- language: Malay task: extractive question answering datasets: Malay SQuAD tags: - bert-base --- # Model Description This model is for Malay extractive question answering. It is based on the [malay-huggingface/bert-base-bahasa-cased](https://huggingface.co/malay-huggingface/bert-base-bahasa-cased/tree/main) model, and it is case-sensitive: it makes a difference between english and English. # Training data [Malay SQuAD v2.0](https://github.com/huseinzol05/malay-dataset/tree/master/question-answer/squad) # How to use You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline: ``` python >>> from transformers.pipelines import pipeline >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-ms-bert-base") >>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-ms-bert-base") >>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer) >>> context = "Pada manusia, tindak balas ini diaktifkan dengan pelengkap pengikatan kepada antibodi yang telah melekat pada mikrob ini atau pengikatan protein pelengkap kepada karbohidrat pada permukaan mikrob. Isyarat pengiktirafan ini mencetuskan tindak balas pembunuhan yang pantas. Kelajuan tindak balas adalah hasil penguatan isyarat yang berlaku berikutan pengaktifan proteolytik berturutan molekul pelengkap, yang juga protease. Selepas protein pelengkap pada mulanya mengikat kepada mikrob, mereka mengaktifkan aktiviti protease mereka, yang seterusnya mengaktifkan protease pelengkap lain, dan sebagainya. Ini menghasilkan cascade bermangkin yang menguatkan isyarat awal dengan maklum balas positif terkawal. Kastil menghasilkan penghasilan peptida yang menarik sel imun, meningkatkan kebolehtelapan vaskular, dan opsonize (kot) permukaan patogen, menandakannya untuk kemusnahan. Pemendapan pelengkap ini juga boleh membunuh sel secara terus dengan mengganggu membran plasma mereka." >>> question = "Protein pelengkap mengikat molekul apa yang berada di permukaan mikrob untuk mendapatkan tindak balas imun?" >>> inputs = {"question": question, "context":context } >>> nlp(inputs) {'score': 0.9848766922950745, 'start': 162, 'end': 173, 'answer': 'karbohidrat'} ```
agi-css/distilroberta-base-mrl-sym
agi-css
2022-04-23T04:30:29Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-23T04:28:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilroberta-base-mrl-sym results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrl-sym This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.740146306575944e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | No log | 1.0 | 150 | 0.0001 | 1.0 | 1.0 | | No log | 2.0 | 300 | 0.0001 | 1.0 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0.dev20220422+cu116 - Datasets 2.1.0 - Tokenizers 0.12.1
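Since the card above gives no usage snippet, here is a minimal sketch of loading this text-classification checkpoint with a 🤗 Transformers pipeline (the example sentence is arbitrary, and the label names depend on the undocumented training data):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="agi-css/distilroberta-base-mrl-sym",
)

# label names come from the model's config; the training data is not documented above
print(classifier("An arbitrary example sentence to classify."))
```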
agi-css/distilroberta-base-etc-sym
agi-css
2022-04-23T04:26:16Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-23T04:24:06Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilroberta-base-etc-sym results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-etc-sym This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0005 - Accuracy: 0.9997 - F1: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.740146306575944e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 262 | 0.0068 | 0.9987 | 0.9987 | | No log | 2.0 | 524 | 0.0005 | 0.9997 | 0.9997 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0.dev20220422+cu116 - Datasets 2.1.0 - Tokenizers 0.12.1
agi-css/distilroberta-base-etc-nlp
agi-css
2022-04-23T04:20:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-23T04:18:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilroberta-base-etc-nlp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-etc-nlp This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0039 - Accuracy: 0.9993 - F1: 0.9993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.740146306575944e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 262 | 0.0025 | 0.9997 | 0.9997 | | No log | 2.0 | 524 | 0.0039 | 0.9993 | 0.9993 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0.dev20220422+cu116 - Datasets 2.1.0 - Tokenizers 0.12.1
TibbtechUser/wav2vec2-base-urdu-demo-colab
TibbtechUser
2022-04-23T02:50:59Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-23T01:14:07Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-urdu-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-urdu-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
huggingtweets/angelicism010-propertyexile-wretched_worm
huggingtweets
2022-04-23T01:52:59Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-23T01:18:51Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1517583783020666881/mmUj6mkI_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1383763210314997773/aIIDR23G_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1517290992361422848/E5jRRDlu_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Primo & offlineism010 & wretched worm</div> <div style="text-align: center; font-size: 14px;">@angelicism010-propertyexile-wretched_worm</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Primo & offlineism010 & wretched worm. | Data | Primo | offlineism010 | wretched worm | | --- | --- | --- | --- | | Tweets downloaded | 200 | 278 | 3234 | | Retweets | 32 | 4 | 320 | | Short tweets | 17 | 28 | 549 | | Tweets kept | 151 | 246 | 2365 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3o7b93qp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angelicism010-propertyexile-wretched_worm's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30uxuf66) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30uxuf66/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/angelicism010-propertyexile-wretched_worm') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Tanhim/gpt2-model-de
Tanhim
2022-04-22T23:24:24Z
21
3
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "de", "license:gpl", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: de widget: - text: Hallo, ich bin ein Sprachmodell license: gpl --- <h2> GPT2 Model for German Language </h2> Model Name: Tanhim/gpt2-model-de <br /> language: German or Deutsch <br /> thumbnail: https://huggingface.co/Tanhim/gpt2-model-de <br /> datasets: Ten Thousand German News Articles Dataset <br /> ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generation= pipeline('text-generation', model='Tanhim/gpt2-model-de', tokenizer='Tanhim/gpt2-model-de') >>> set_seed(42) >>> generation("Hallo, ich bin ein Sprachmodell,", max_length=30, num_return_sequences=5) ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de") model = AutoModelWithLMHead.from_pretrained("Tanhim/gpt2-model-de") text = "Ersetzen Sie mich durch einen beliebigen Text, den Sie wünschen." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` Citation request: If you use the model of this repository in your research, please consider citing the following way: ```python @misc{GermanTransformer, author = {Tanhim Islam}, title = {{PyTorch Based Transformer Machine Learning Model for German Text Generation Task}}, howpublished = "\url{https://huggingface.co/Tanhim/gpt2-model-de}", year = {2021}, note = "[Online; accessed 17-June-2021]" } ```
huggingtweets/plsnobullywaaa
huggingtweets
2022-04-22T20:47:21Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-22T16:00:54Z
--- language: en thumbnail: http://www.huggingtweets.com/plsnobullywaaa/1650660437516/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1511292594214551557/4T_znkpc_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">clementine</div> <div style="text-align: center; font-size: 14px;">@plsnobullywaaa</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from clementine. | Data | clementine | | --- | --- | | Tweets downloaded | 774 | | Retweets | 32 | | Short tweets | 258 | | Tweets kept | 484 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/125ldexx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @plsnobullywaaa's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2whc68l3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2whc68l3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/plsnobullywaaa') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
salihkavaf/distilbert-base-uncased-finetuned-imdb
salihkavaf
2022-04-22T19:34:15Z
3
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-22T11:19:02Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: salihkavaf/distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # salihkavaf/distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [salihkavaf/distilbert-base-uncased-finetuned-imdb](https://huggingface.co/salihkavaf/distilbert-base-uncased-finetuned-imdb) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.6769 - Validation Loss: 2.5848 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.6769 | 2.5848 | 0 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
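This card also omits a usage example; a minimal sketch of masked-token prediction with the TensorFlow weights might look like the following (the input sentence is an illustrative placeholder):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model_id = "salihkavaf/distilbert-base-uncased-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

# placeholder sentence; [MASK] marks the token to predict
inputs = tokenizer("This movie was absolutely [MASK].", return_tensors="tf")
logits = model(**inputs).logits

# locate the mask position and print the five most likely fillers
mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
top_ids = tf.math.top_k(logits[0, mask_index], k=5).indices.numpy().tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```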
huggingtweets/it_its_are_are-miyarepostbot-unbridled_id
huggingtweets
2022-04-22T19:04:30Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-22T19:04:23Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1376263696389914629/_FzhUcTW_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480214799539740676/S3W8I0f2_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1400304659688878088/Lbb8zMZE_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sierra Armour 𝔼𝕣𝕚𝕤 & angelicism2727272628 & Miya</div> <div style="text-align: center; font-size: 14px;">@it_its_are_are-miyarepostbot-unbridled_id</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sierra Armour 𝔼𝕣𝕚𝕤 & angelicism2727272628 & Miya. | Data | Sierra Armour 𝔼𝕣𝕚𝕤 | angelicism2727272628 | Miya | | --- | --- | --- | --- | | Tweets downloaded | 3146 | 179 | 1840 | | Retweets | 545 | 28 | 23 | | Short tweets | 413 | 20 | 214 | | Tweets kept | 2188 | 131 | 1603 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wlae4njw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_its_are_are-miyarepostbot-unbridled_id's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xs5iik1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xs5iik1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/it_its_are_are-miyarepostbot-unbridled_id') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/mimpathy
huggingtweets
2022-04-22T18:39:10Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-22T18:38:06Z
--- language: en thumbnail: http://www.huggingtweets.com/mimpathy/1650652745938/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1269411300624363520/-xYW6d_6_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">𝓗𝓸𝓷𝓸𝓻</div> <div style="text-align: center; font-size: 14px;">@mimpathy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 𝓗𝓸𝓷𝓸𝓻. | Data | 𝓗𝓸𝓷𝓸𝓻 | | --- | --- | | Tweets downloaded | 2299 | | Retweets | 211 | | Short tweets | 331 | | Tweets kept | 1757 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17w4ucd3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mimpathy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qr7mqkc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qr7mqkc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mimpathy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
cj-mills/bert-base-uncased-issues-128
cj-mills
2022-04-22T18:29:07Z
5
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-22T18:10:32Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1071 | 1.0 | 291 | 1.6964 | | 1.6421 | 2.0 | 582 | 1.4279 | | 1.4853 | 3.0 | 873 | 1.3924 | | 1.4014 | 4.0 | 1164 | 1.3701 | | 1.3388 | 5.0 | 1455 | 1.1944 | | 1.283 | 6.0 | 1746 | 1.2795 | | 1.2394 | 7.0 | 2037 | 1.2671 | | 1.2014 | 8.0 | 2328 | 1.2084 | | 1.1668 | 9.0 | 2619 | 1.1783 | | 1.14 | 10.0 | 2910 | 1.2076 | | 1.1277 | 11.0 | 3201 | 1.2081 | | 1.1053 | 12.0 | 3492 | 1.1628 | | 1.0819 | 13.0 | 3783 | 1.2544 | | 1.0763 | 14.0 | 4074 | 1.1695 | | 1.0634 | 15.0 | 4365 | 1.1157 | | 1.0637 | 16.0 | 4656 | 1.2526 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
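No usage snippet is given above either; a minimal sketch with the fill-mask pipeline could look like this (the masked sentence is only an illustrative placeholder):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="cj-mills/bert-base-uncased-issues-128")

# placeholder sentence; [MASK] is the BERT mask token
for prediction in fill_mask("The training loop crashes when the [MASK] is empty."):
    print(prediction["token_str"], round(prediction["score"], 3))
```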
huggingtweets/proanatwink
huggingtweets
2022-04-22T17:26:21Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-22T16:43:24Z
--- language: en thumbnail: http://www.huggingtweets.com/proanatwink/1650648376939/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509040026625224705/B_S4MCbD_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">God is Love (((they)))/them🪲✊🏼🇺🇦🇮🇱🏳️‍⚧️</div> <div style="text-align: center; font-size: 14px;">@proanatwink</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from God is Love (((they)))/them🪲✊🏼🇺🇦🇮🇱🏳️‍⚧️. | Data | God is Love (((they)))/them🪲✊🏼🇺🇦🇮🇱🏳️‍⚧️ | | --- | --- | | Tweets downloaded | 613 | | Retweets | 120 | | Short tweets | 142 | | Tweets kept | 351 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yp8eka3q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @proanatwink's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lu2xkr5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lu2xkr5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/proanatwink') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT
caurdy
2022-04-22T16:45:56Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "license:afl-3.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-21T20:03:06Z
--- license: afl-3.0 --- Fine-tuned version of the facebook/wav2vec2-large-960h-lv60-self pre-trained model, trained on 72 hours of MI Diaries data. WER improved from 13% to 9.7% on a 20-minute test set of MI Diaries audio clips (https://mi-diaries.org/). ### Usage model = Wav2Vec2ForCTC.from_pretrained("caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT") processor = Wav2Vec2Processor.from_pretrained("caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT")
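Expanding the two usage lines above into a runnable form, a minimal sketch might look like the following (the audio path is a placeholder, and librosa is only one of several ways to load and resample a clip to the 16 kHz rate wav2vec2 expects):

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "diary_clip.wav" is a placeholder path; resample to 16 kHz for the model
speech, _ = librosa.load("diary_clip.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```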
csebuetnlp/mT5_m2o_english_crossSum
csebuetnlp
2022-04-22T15:06:41Z
31
4
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "arxiv:2112.08804", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- tags: - summarization - mT5 language: - am - ar - az - bn - my - zh - en - fr - gu - ha - hi - ig - id - ja - rn - ko - ky - mr - ne - om - ps - fa - pcm - pt - pa - ru - gd - sr - si - so - es - sw - ta - te - th - ti - tr - uk - ur - uz - vi - cy - yo licenses: - cc-by-nc-sa-4.0 widget: - text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization." --- # mT5-m2o-english-CrossSum This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **english**, i.e. this model tries to **summarize text written in any language in English.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. 
"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_m2o_english_crossSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ## Citation If you use this model, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ```
csebuetnlp/mT5_m2o_hindi_crossSum
csebuetnlp
2022-04-22T15:03:33Z
17
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "arxiv:2112.08804", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-04-20T15:41:03Z
--- tags: - summarization - mT5 language: - am - ar - az - bn - my - zh - en - fr - gu - ha - hi - ig - id - ja - rn - ko - ky - mr - ne - om - ps - fa - pcm - pt - pa - ru - gd - sr - si - so - es - sw - ta - te - th - ti - tr - uk - ur - uz - vi - cy - yo licenses: - cc-by-nc-sa-4.0 widget: - text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization." --- # mT5-m2o-hindi-CrossSum This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **hindi**, i.e. this model tries to **summarize text written in any language in Hindi.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. 
"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_m2o_hindi_crossSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ## Citation If you use this model, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ```
junnyu/chinese_GAU-alpha-char_L-24_H-768-paddle
junnyu
2022-04-22T12:29:07Z
0
0
null
[ "paddlepaddle", "gau-alpha", "zh", "region:us" ]
null
2022-04-22T12:13:38Z
--- language: zh tags: - gau-alpha - paddlepaddle inference: False --- # PyTorch and Paddle code https://github.com/JunnYu/GAU-alpha-pytorch # bert4keras code https://github.com/ZhuiyiTechnology/GAU-alpha # Install ```bash Go to https://github.com/JunnYu/GAU-alpha-pytorch and download the paddle code gau_alpha_paddle ``` # Usage ```python import paddle from transformers import BertTokenizer as GAUAlphaTokenizer from gau_alpha_paddle import GAUAlphaForMaskedLM text = "今天[MASK]很好,我[MASK]去公园玩。" tokenizer = GAUAlphaTokenizer.from_pretrained( "junnyu/chinese_GAU-alpha-char_L-24_H-768" ) pd_model = GAUAlphaForMaskedLM.from_pretrained("chinese_GAU-alpha-char_L-24_H-768") pd_model.eval() pd_inputs = tokenizer(text) pd_inputs = {k: paddle.to_tensor([v]) for k, v in pd_inputs.items()} with paddle.no_grad(): pd_outputs = pd_model(**pd_inputs)[0][0] pd_outputs_sentence = "paddle: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: val, idx = paddle.nn.functional.softmax(pd_outputs[i], -1).topk(k=5) tokens = tokenizer.convert_ids_to_tokens(idx) new_tokens = [] for v, t in zip(val.cpu(), tokens): new_tokens.append(f"{t}+{round(v.item(),4)}") pd_outputs_sentence += "[" + "||".join(new_tokens) + "]" else: pd_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True) ) print(pd_outputs_sentence) # paddle: 今天[天+0.8657||气+0.0535||阳+0.0165||,+0.0126||晴+0.0111]很好,我[要+0.4619||想+0.4352||又+0.0252||就+0.0157||跑+0.0064]去公园玩。 ``` # Reference Bibtex: ```tex @techreport{gau-alpha, title={GAU-α: GAU-based Transformers for NLP - ZhuiyiAI}, author={Jianlin Su, Shengfeng Pan, Bo Wen, Yunfeng Liu}, year={2022}, url="https://github.com/ZhuiyiTechnology/GAU-alpha", } ```
maretamasaeva/thesis-freeform-yesno
maretamasaeva
2022-04-22T12:14:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-13T12:34:24Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: thesis-freeform-yesno results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thesis-freeform-yesno This model is a fine-tuned version of [maretamasaeva/thesis-freeform](https://huggingface.co/maretamasaeva/thesis-freeform) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4547 - Accuracy: 0.0194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.5001 | 1.0 | 9052 | 2.4600 | 0.0194 | | 2.4921 | 2.0 | 18104 | 2.4595 | 0.0194 | | 2.4879 | 3.0 | 27156 | 2.4576 | 0.0194 | | 2.4793 | 4.0 | 36208 | 2.4547 | 0.0194 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
Vishfeb27/wav2vec2-base-timit-demo-colab
Vishfeb27
2022-04-22T11:31:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-22T10:30:28Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
stevems1/bert-base-uncased-Ganesh123
stevems1
2022-04-22T07:46:54Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-22T07:12:32Z
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-Ganesh123 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-Ganesh123 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
cosmo/distilbert-base-uncased-finetuned-squad
cosmo
2022-04-22T07:14:22Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-04-13T10:18:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
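As with several cards above, no usage example is provided; a minimal sketch with the question-answering pipeline might be the following (question and context are illustrative placeholders):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="cosmo/distilbert-base-uncased-finetuned-squad",
)

# illustrative placeholder inputs
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The checkpoint is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```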
obokkkk/wav2vec2-base-960h-timit-demo-colab
obokkkk
2022-04-22T04:45:54Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-22T02:59:15Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-960h-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-960h-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2002 - Wer: 0.2160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.7805 | 4.0 | 500 | 3.0558 | 1.0 | | 2.2936 | 8.0 | 1000 | 0.2937 | 0.3479 | | 0.4155 | 12.0 | 1500 | 0.2108 | 0.2473 | | 0.2439 | 16.0 | 2000 | 0.2313 | 0.2391 | | 0.1617 | 20.0 | 2500 | 0.2003 | 0.2255 | | 0.1443 | 24.0 | 3000 | 0.2175 | 0.2207 | | 0.119 | 28.0 | 3500 | 0.2002 | 0.2160 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
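### Example usage (sketch)

A minimal transcription sketch, assuming 16 kHz mono audio (as expected by the underlying wav2vec2-base-960h checkpoint); the file path below is hypothetical.

```python
from transformers import pipeline

# Sketch only: "speech.wav" is a hypothetical 16 kHz mono recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="obokkkk/wav2vec2-base-960h-timit-demo-colab",
)
print(asr("speech.wav")["text"])
```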
aalogan/bert-finetuned-ner
aalogan
2022-04-22T04:15:19Z
3
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-21T15:39:21Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: aalogan/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aalogan/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0170 - Validation Loss: 0.0546 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1722 | 0.0676 | 0 | | 0.0481 | 0.0531 | 1 | | 0.0270 | 0.0551 | 2 | | 0.0170 | 0.0546 | 3 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
kabelomalapane/model_zu-en_updated
kabelomalapane
2022-04-22T02:55:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-04-21T09:33:12Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: model_zu-en_updated results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_zu-en_updated This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8306 - Bleu: 27.1218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
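### Example usage (sketch)

A minimal usage sketch, assuming the checkpoint works with the generic translation pipeline of the Marian architecture it was fine-tuned from; the Zulu example sentence is hypothetical.

```python
from transformers import pipeline

# Sketch only: assumes the bundled tokenizer handles Zulu input directly.
translator = pipeline("translation", model="kabelomalapane/model_zu-en_updated")
print(translator("Ngiyathanda ukufunda izincwadi.")[0]["translation_text"])
```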
satish860/sms_spam_detection-manning
satish860
2022-04-22T02:22:56Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-22T02:20:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: sms_spam_detection-manning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sms_spam_detection-manning This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0512 - Accuracy: 0.9886 - F1: 0.9573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.1.0 - Tokenizers 0.12.1
okho0653/distilbert-base-uncased-zero-shot-sentiment-model
okho0653
2022-04-22T01:33:28Z
326
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-22T01:28:21Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-zero-shot-sentiment-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-zero-shot-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
PrasunMishra/finetuning-sentiment-model-3000-samples
PrasunMishra
2022-04-22T01:20:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-22T00:59:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.11.6
shyamsn97/Jax-NCA
shyamsn97
2022-04-21T23:35:37Z
0
0
null
[ "image-generation", "region:us" ]
null
2022-04-21T20:09:15Z
--- tags: - image-generation --- # Neural Cellular Automata (Based on https://distill.pub/2020/growing-ca/) implemented in Jax (Flax) ## Installation from source ```bash git clone git@github.com:shyamsn97/jax-nca.git cd jax-nca python setup.py install ``` from PYPI ```bash pip install jax-nca ``` ## How do NCAs work? For more information, view the awesome article https://distill.pub/2020/growing-ca/ -- Mordvintsev, et al., "Growing Neural Cellular Automata", Distill, 2020 Image below describes a single update step: https://github.com/distillpub/post--growing-ca/blob/master/public/figures/model.svg ## Why Jax? <b> Note: This project served as a nice introduction to jax, so its performance can probably be improved </b> NCAs are autoregressive models like RNNs, where new states are calculated from previous ones. With jax, we can make these operations a lot more performant with `jax.lax.scan` and `jax.jit` (https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html) Instead of writing the nca growth process as ```python def multi_step(params, nca, current_state, num_steps): # params: parameters for NCA # nca: Flax Module describing NCA # current_state: Current NCA state # num_steps: number of steps to run for i in range(num_steps): current_state = nca.apply(params, current_state) return current_state ``` We can write this with `jax.lax.scan` ```python def multi_step(params, nca, current_state, num_steps): # params: parameters for NCA # nca: Flax Module describing NCA # current_state: Current NCA state # num_steps: number of steps to run def forward(carry, inp): carry = nca.apply({"params": params}, carry) return carry, carry final_state, nca_states = jax.lax.scan(forward, current_state, None, length=num_steps) return final_state ``` The actual multi_step implementation can be found here: https://github.com/shyamsn97/jax-nca/blob/main/jax_nca/nca.py#L103 ## Usage See [notebooks/Gecko.ipynb](notebooks/Gecko.ipynb) for a full example <b> Currently there's a bug with the stochastic update, so only `cell_fire_rate = 1.0` works at the moment </b> Creating and using NCA ```python class NCA(nn.Module): num_hidden_channels: int num_target_channels: int = 3 alpha_living_threshold: float = 0.1 cell_fire_rate: float = 1.0 trainable_perception: bool = False alpha: float = 1.0 """ num_hidden_channels: Number of hidden channels for each cell to use num_target_channels: Number of target channels to be used alpha_living_threshold: threshold to determine whether a cell lives or dies cell_fire_rate: probability that a cell receives an update per step trainable_perception: if true, instead of using sobel filters use a trainable conv net alpha: scalar value to be multiplied to updates """ ... 
import jax from jax_nca.nca import NCA # usage nca = NCA( num_hidden_channels = 16, num_target_channels = 3, trainable_perception = False, cell_fire_rate = 1.0, alpha_living_threshold = 0.1 ) nca_seed = nca.create_seed( nca.num_hidden_channels, nca.num_target_channels, shape=(64,64), batch_size=1 ) rng = jax.random.PRNGKey(0) params = nca.init(rng, nca_seed, rng)["params"] update = nca.apply({"params":params}, nca_seed, jax.random.PRNGKey(10)) # multi step final_state, nca_states = nca.multi_step(params, nca_seed, jax.random.PRNGKey(10), num_steps=32) ``` To train the NCA ```python from jax_nca.dataset import ImageDataset from jax_nca.trainer import EmojiTrainer dataset = ImageDataset(emoji='🦎', img_size=64) nca = NCA( num_hidden_channels = 16, num_target_channels = 3, trainable_perception = False, cell_fire_rate = 1.0, alpha_living_threshold = 0.1 ) trainer = EmojiTrainer(dataset, nca, n_damage=0) trainer.train(100000, batch_size=8, seed=10, lr=2e-4, min_steps=64, max_steps=96) # to access train state: state = trainer.state # save nca.save(state.params, "saved_params") # load params loaded_params = nca.load("saved_params") ```
Sarim24/xlm-roberta-base-finetuned-panx-de
Sarim24
2022-04-21T23:12:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-21T22:07:59Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.862669465085938 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1374 - F1: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2596 | 1.0 | 525 | 0.1571 | 0.8302 | | 0.1292 | 2.0 | 1050 | 0.1416 | 0.8455 | | 0.0809 | 3.0 | 1575 | 0.1374 | 0.8627 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
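### Example usage (sketch)

A minimal usage sketch; the German example sentence is hypothetical, and entity grouping relies on the standard token-classification pipeline.

```python
from transformers import pipeline

# Group sub-word predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="Sarim24/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Werk von Siemens in München."))
```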
surajnair/r3m-18
surajnair
2022-04-21T20:32:32Z
3
1
transformers
[ "transformers", "pytorch", "r3m", "endpoints_compatible", "region:us" ]
null
2022-04-21T20:10:15Z
This model contains the pre-trained ResNet18 R3M model from the paper "R3M: A Universal Visual Representation for Robot Manipulation" (Nair et al.). The model is trained on the Ego4D dataset using time-contrastive learning, video-language alignment, and sparsity objectives, and is intended for efficient downstream robotic learning.
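A loading sketch under stated assumptions: it presumes the checkpoint can be pulled through transformers' `AutoModel` and that the encoder accepts batches of RGB frames; the input shape and pixel scaling shown are illustrative, not confirmed by the card.

```python
import torch
from transformers import AutoModel

# Assumption: the repo exposes a transformers-loadable R3M encoder.
model = AutoModel.from_pretrained("surajnair/r3m-18")
model.eval()

# Hypothetical input: one 224x224 RGB frame with pixel values in [0, 255].
frame = torch.randint(0, 255, (1, 3, 224, 224), dtype=torch.float32)
with torch.no_grad():
    embedding = model(frame)  # ResNet18-based encoders typically yield a 512-d feature
```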
jackmleitch/distilbert-base-uncased-distilled-clinc
jackmleitch
2022-04-21T20:04:39Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-21T19:48:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9432258064516129 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1004 - Accuracy: 0.9432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9044 | 1.0 | 318 | 0.5748 | 0.7390 | | 0.4491 | 2.0 | 636 | 0.2876 | 0.88 | | 0.2538 | 3.0 | 954 | 0.1813 | 0.9229 | | 0.1765 | 4.0 | 1272 | 0.1388 | 0.9294 | | 0.1422 | 5.0 | 1590 | 0.1214 | 0.9345 | | 0.1243 | 6.0 | 1908 | 0.1114 | 0.9406 | | 0.1138 | 7.0 | 2226 | 0.1066 | 0.94 | | 0.1076 | 8.0 | 2544 | 0.1030 | 0.9423 | | 0.104 | 9.0 | 2862 | 0.1010 | 0.9419 | | 0.1019 | 10.0 | 3180 | 0.1004 | 0.9432 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter9
4m1g0
2022-04-21T19:45:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-21T09:07:56Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-gl-jupyter9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-gl-jupyter9 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0970 - Wer: 0.0624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6977 | 3.36 | 400 | 0.4273 | 0.4574 | | 0.2282 | 6.72 | 800 | 0.1492 | 0.1723 | | 0.0884 | 10.08 | 1200 | 0.1344 | 0.1336 | | 0.0594 | 13.44 | 1600 | 0.1329 | 0.1238 | | 0.0437 | 16.8 | 2000 | 0.1137 | 0.1153 | | 0.0384 | 20.17 | 2400 | 0.1197 | 0.1033 | | 0.0332 | 23.53 | 2800 | 0.1147 | 0.0980 | | 0.0282 | 26.89 | 3200 | 0.1079 | 0.0917 | | 0.0236 | 30.25 | 3600 | 0.1144 | 0.0922 | | 0.0237 | 33.61 | 4000 | 0.1130 | 0.0880 | | 0.019 | 36.97 | 4400 | 0.1035 | 0.0818 | | 0.0164 | 40.33 | 4800 | 0.1045 | 0.0813 | | 0.0146 | 43.69 | 5200 | 0.1037 | 0.0735 | | 0.0111 | 47.06 | 5600 | 0.1085 | 0.0701 | | 0.0093 | 50.42 | 6000 | 0.1039 | 0.0659 | | 0.0084 | 53.78 | 6400 | 0.0970 | 0.0636 | | 0.0073 | 57.14 | 6800 | 0.0970 | 0.0624 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
Tejas21/Totto_t5_base_BERT_Score_20k_steps
Tejas21
2022-04-21T18:47:18Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-20T17:35:32Z
--- license: apache-2.0 language: - en tags: - Table to text - Data to text --- ## Dataset: - [ToTTo](https://github.com/google-research-datasets/ToTTo) A Controlled Table-to-Text Dataset. ToTTo is an open-source table-to-text dataset with over 120,000 examples in the English language. It defines a controlled generation task: given a Wikipedia table and a set of highlighted cells, generate a one-sentence description. ## Base Model - T5-Base [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) T5 was built by the Google team to create a general-purpose model that can understand text. The basic idea behind T5 is to treat every text-processing problem as a “text-to-text” problem, i.e. taking text as input and producing new text as output. ## Baseline Preprocessing [Baseline Preprocessing](https://github.com/google-research/language/tree/master/language/totto) This code repository supplements the main ToTTo repository and can be used for basic preprocessing of the ToTTo dataset. ## Fine-tuning On the ToTTo dataset, we fine-tuned the T5 conditional generation model for 10,000 steps using BLEU as the metric and then for 20,000 steps using [BERT-SCORE](https://github.com/Tiiiger/bert_score) as the metric.
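### Example usage (sketch)

A generation sketch under stated assumptions: the linearized-table input below is a made-up example in the style produced by the ToTTo baseline preprocessing; the exact tags the model expects should be taken from that preprocessing code.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Tejas21/Totto_t5_base_BERT_Score_20k_steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Hypothetical linearized table (highlighted cells plus page/section titles).
table = ("<page_title> Gabriele Becker </page_title> "
         "<section_title> International Competitions </section_title> "
         "<cell> 1992 </cell> <cell> World Junior Championships </cell> <cell> 4th </cell>")

inputs = tokenizer(table, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```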
Tejas21/Totto_t5_base_BLEURT_24k_steps
Tejas21
2022-04-21T18:43:02Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2004.04696", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-20T13:36:14Z
--- license: apache-2.0 language: - en tags: - Table to text - Data to text --- ## Dataset: - [ToTTo](https://github.com/google-research-datasets/ToTTo) A Controlled Table-to-Text Dataset. ToTTo is an open-source table-to-text dataset with over 120,000 examples in the English language. It defines a controlled generation task: given a Wikipedia table and a set of highlighted cells, generate a one-sentence description. ## Base Model - T5-Base [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) T5 was built by the Google team to create a general-purpose model that can understand text. The basic idea behind T5 is to treat every text-processing problem as a “text-to-text” problem, i.e. taking text as input and producing new text as output. ## Baseline Preprocessing [Baseline Preprocessing](https://github.com/google-research/language/tree/master/language/totto) This code repository supplements the main ToTTo repository and can be used for basic preprocessing of the ToTTo dataset. ## Fine-tuning We fine-tuned the T5 conditional generation model on the ToTTo dataset for 24,000 steps using [BLEURT](https://arxiv.org/abs/2004.04696) as the metric.
KevinForm/bert-finetuned-ner
KevinForm
2022-04-21T17:08:02Z
3
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-21T17:05:16Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: KevinForm/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # KevinForm/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1831 - Validation Loss: 0.0644 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1831 | 0.0644 | 0 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.6.2 - Datasets 1.18.4 - Tokenizers 0.11.6
satish860/finetuning-sentiment-model-3000-samples
satish860
2022-04-21T17:02:49Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-21T16:53:44Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0454 - Accuracy: 0.9886 - F1: 0.9571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.1.0 - Tokenizers 0.12.1
hyesunyun/update-summarization-led-edit-at-a-time
hyesunyun
2022-04-21T16:05:24Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "led", "text2text-generation", "update summarization", "longformer", "BART", "PyTorch", "Tensorboard", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-21T14:55:31Z
--- language: - en tags: - update summarization - longformer - transformers - BART - PyTorch - Tensorboard - led metrics: - edit distance - ROUGE - BertScore --- # Update Summarization with BART Large and Longformer Encoder Decoder ## Model description This model is a Transformer-based model that supports long-document generative sequence-to-sequence tasks. It is based on [BART Large](https://huggingface.co/transformers/model_doc/bart.html) with the [Longformer Encoder Decoder](https://huggingface.co/transformers/model_doc/led.html) to allow for longer inputs. The output is a single edit operation, which includes the action (deletion or insertion), the index of where the edit should happen (indexed by words), and the actual text to delete or insert. ## Intended uses & limitations #### How to use Format your data so that each new article or piece of evidence to add has an `<EV>` token in front, with each title prefixed by `<t>` and each abstract prefixed by `<abs>`. The original summary should also be in the same format. You can concatenate the list of articles and the original summary in any order as long as they have the correct separator tokens. ```python import torch from transformers import LEDTokenizer, LEDForConditionalGeneration tokenizer = LEDTokenizer.from_pretrained("hyesunyun/update-summarization-led-edit-at-a-time") model = LEDForConditionalGeneration.from_pretrained("hyesunyun/update-summarization-led-edit-at-a-time") input = "<EV> <t> Hypoglycemic effect of bitter melon compared with metformin in newly diagnosed type 2 diabetes patients. <abs> ETHNOPHARMACOLOGICAL RELEVANCE: Bitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use. AIM OF STUDY: This study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin. MATERIALS AND METHODS: This is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks. RESULTS: There was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 mumol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 mumol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 mumol/L, respectively). CONCLUSIONS: Bitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day. <EV> <t> Momordica charantia for type 2 diabetes mellitus. <abs> There is insufficient evidence to recommend momordica charantia for type 2 diabetes mellitus. Further studies are therefore required to address the issues of standardization and the quality control of preparations. For medical nutritional therapy, further observational trials evaluating the effects of momordica charantia are needed before RCTs are established to guide any recommendations in clinical practice."
inputs_dict = tokenizer(input, padding="max_length", max_length=10240, return_tensors="pt", truncation=True) input_ids = inputs_dict.input_ids attention_mask = inputs_dict.attention_mask global_attention_mask = torch.zeros_like(attention_mask) # put global attention on <s> token global_attention_mask[:, 0] = 1 predicted_summary_ids = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask) print(tokenizer.batch_decode(predicted_summary_ids, skip_special_tokens=False)) ``` The expected output should be something like `<s> insertion <edit_pad> zero <edit_pad> bla bla bla some text </s>` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Used pre-trained [LED model](https://huggingface.co/transformers/model_doc/led.html) and fine-tuned using the dataset found in [this github repo](https://github.com/hyesunyun/update_summarization_data). ## Training procedure Preprocessing, hardware used, hyperparameters... ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2022} } ```
waboucay/camembert-base-finetuned-nli-repnum_wl-rua_wl
waboucay
2022-04-21T15:10:51Z
5
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "nli", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T09:39:53Z
--- language: - fr tags: - nli metrics: - f1 --- ## Eval results We obtain the following results on ```validation``` and ```test``` sets: | Set | F1<sub>micro</sub> | F1<sub>macro</sub> | |------------|--------------------|--------------------| | validation | 73.5 | 73.5 | | test | 75.5 | 75.5 |
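## Example usage (sketch)

A minimal usage sketch: it loads the checkpoint as a standard sequence-classification model and feeds a hypothetical French premise/hypothesis pair; the label names come from the model config and are not documented here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "waboucay/camembert-base-finetuned-nli-repnum_wl-rua_wl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical premise/hypothesis pair.
premise = "Le projet de loi a été adopté à l'unanimité."
hypothesis = "Le projet de loi a été rejeté."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```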
Intel/electra-small-discriminator-mrpc
Intel
2022-04-21T14:33:49Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-21T14:32:59Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: electra-small-discriminator-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8529411764705882 - name: F1 type: f1 value: 0.8983050847457628 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-discriminator-mrpc This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.3909 - Accuracy: 0.8529 - F1: 0.8983 - Combined Score: 0.8756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.1.0 - Tokenizers 0.11.6
lamyae/distilroberta-base-finetuned-wikitext2
lamyae
2022-04-21T12:48:59Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-21T11:40:23Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 9 | 3.3324 | | No log | 2.0 | 18 | 3.1066 | | No log | 3.0 | 27 | 3.2930 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
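### Example usage (sketch)

A quick usage sketch: the model is a masked language model, so it can be queried through the fill-mask pipeline; the prompt is a hypothetical example.

```python
from transformers import pipeline

# RoBERTa-style models use "<mask>" as the mask token.
fill = pipeline("fill-mask", model="lamyae/distilroberta-base-finetuned-wikitext2")
for candidate in fill("The capital of France is <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```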
okho0653/distilbert-base-uncased-few-shot-sentiment-model
okho0653
2022-04-21T12:28:05Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-21T12:20:11Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-few-shot-sentiment-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-few-shot-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6819 - Accuracy: 0.75 - F1: 0.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
patrickvonplaten/data2vec-audio-base-100h-4-gram
patrickvonplaten
2022-04-21T10:39:14Z
4
0
transformers
[ "transformers", "pytorch", "data2vec-audio", "automatic-speech-recognition", "speech", "en", "dataset:librispeech_asr", "arxiv:2202.03555", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-20T10:28:34Z
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Data2Vec-Audio-Base-100h [Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/) The base model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. [Paper](https://arxiv.org/abs/2202.03555) Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli **Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec . # Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Data2VecAudioForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-100h") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-100h") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ```
huggingtweets/route2fi
huggingtweets
2022-04-21T10:07:42Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-21T10:07:34Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1469588644088451073/VEu0DKDG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Route 2 FI</div> <div style="text-align: center; font-size: 14px;">@route2fi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Route 2 FI. | Data | Route 2 FI | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 1 | | Short tweets | 264 | | Tweets kept | 2985 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gjkyb1x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @route2fi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3q0o96ub) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3q0o96ub/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/route2fi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
liamcripwell/ctrl44-simp
liamcripwell
2022-04-21T09:32:59Z
232
4
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-21T08:38:41Z
--- language: en --- # CTRL44 Simplification model This is a pretrained version of the controllable simplification model presented in the NAACL 2022 paper "Controllable Sentence Simplification via Operation Classification". It was trained on the IRSD simplification dataset. A control token is expected at the start of input sequences to dictate which simplification operation should be performed. This can either be done manually or with an operation classifier like [this one](https://huggingface.co/liamcripwell/ctrl44-clf). Possible control tokens are: "\<ident\>", "\<para\>", "\<ssplit\>", and "\<dsplit\>". ## How to use Here is how to use this model in PyTorch: ```python from transformers import BartForConditionalGeneration, AutoTokenizer model = BartForConditionalGeneration.from_pretrained("liamcripwell/ctrl44-simp") tokenizer = AutoTokenizer.from_pretrained("liamcripwell/ctrl44-simp") text = "<para> Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017." inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, num_beams=10, max_length=128) ```
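The generated ids can then be decoded back to text; a typical follow-up to the snippet above:

```python
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```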
Intel/xlnet-base-cased-mrpc
Intel
2022-04-21T07:46:07Z
4
1
transformers
[ "transformers", "pytorch", "xlnet", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-21T07:44:55Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: xlnet-base-cased-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8455882352941176 - name: F1 type: f1 value: 0.8896672504378283 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-mrpc This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.7156 - Accuracy: 0.8456 - F1: 0.8897 - Combined Score: 0.8676 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.1.0 - Tokenizers 0.11.6
Intel/xlm-roberta-base-mrpc
Intel
2022-04-21T07:08:18Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-21T06:35:48Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: xlm-roberta-base-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8578431372549019 - name: F1 type: f1 value: 0.901023890784983 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-mrpc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.3703 - Accuracy: 0.8578 - F1: 0.9010 - Combined Score: 0.8794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.1.0 - Tokenizers 0.11.6
thanawan/bert-base-uncased-finetuned-humordetection
thanawan
2022-04-21T06:35:51Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-20T18:57:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-uncased-finetuned-humordetection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-humordetection This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3136 - F1: 0.9586 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 375 | 0.1768 | 0.9507 | | 0.2266 | 2.0 | 750 | 0.1910 | 0.9553 | | 0.08 | 3.0 | 1125 | 0.2822 | 0.9529 | | 0.0194 | 4.0 | 1500 | 0.2989 | 0.9560 | | 0.0194 | 5.0 | 1875 | 0.3136 | 0.9586 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
kurianbenoy/blurr_cnnmail_textsumarrisation
kurianbenoy
2022-04-21T06:28:16Z
0
0
fastai
[ "fastai", "summarization", "license:mit", "region:us" ]
summarization
2022-04-21T06:11:33Z
--- license: mit tags: - fastai - summarization --- ## Fine-tuned Text Summarization Model - CNNMail (blurr model) This model was trained as shown in [this notebook](https://github.com/kurianbenoy/chaloRR/blob/master/TextSummarisation_Seq2Seq.ipynb). Most of the code is based on the [blurr tutorial on modelling with mid-level APIs](https://ohmeow.github.io/blurr/text-modeling-seq2seq-summarization.html#Mid-level-API).
vaariis/distilbert-base-uncased-finetuned-emotion
vaariis
2022-04-21T06:20:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-10T10:46:35Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2218 - Accuracy: 0.9205 - F1: 0.9208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8262 | 1.0 | 250 | 0.3223 | 0.9005 | 0.8971 | | 0.2474 | 2.0 | 500 | 0.2218 | 0.9205 | 0.9208 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Tokenizers 0.12.1
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter7
4m1g0
2022-04-21T05:54:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-20T22:28:22Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-gl-jupyter7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-gl-jupyter7 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1004 - Wer: 0.0647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8074 | 3.36 | 400 | 0.4882 | 0.5245 | | 0.2396 | 6.72 | 800 | 0.1335 | 0.1524 | | 0.0876 | 10.08 | 1200 | 0.1216 | 0.1199 | | 0.0597 | 13.44 | 1600 | 0.1289 | 0.1241 | | 0.0449 | 16.8 | 2000 | 0.1164 | 0.1028 | | 0.0372 | 20.17 | 2400 | 0.1270 | 0.1023 | | 0.0319 | 23.53 | 2800 | 0.1111 | 0.0966 | | 0.0286 | 26.89 | 3200 | 0.1142 | 0.0925 | | 0.0246 | 30.25 | 3600 | 0.1142 | 0.0926 | | 0.0235 | 33.61 | 4000 | 0.1075 | 0.0836 | | 0.0181 | 36.97 | 4400 | 0.1083 | 0.0837 | | 0.0151 | 40.33 | 4800 | 0.1140 | 0.0768 | | 0.014 | 43.69 | 5200 | 0.1015 | 0.0748 | | 0.0111 | 47.06 | 5600 | 0.1023 | 0.0702 | | 0.0093 | 50.42 | 6000 | 0.1028 | 0.0708 | | 0.0078 | 53.78 | 6400 | 0.0999 | 0.0645 | | 0.0071 | 57.14 | 6800 | 0.1004 | 0.0647 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
achyut/patronizing_detection
achyut
2022-04-21T05:18:01Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-14T17:34:38Z
This model is fine-tuned for the Patronizing and Condescending Language classification task. Have fun.
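A minimal usage sketch, since the card gives no loading instructions: it treats the checkpoint as a plain text-classification model; the example sentence is hypothetical and the label mapping is whatever was used during fine-tuning (not documented here).

```python
from transformers import pipeline

# Sketch only: label names depend on the undocumented fine-tuning setup.
clf = pipeline("text-classification", model="achyut/patronizing_detection")
print(clf("These communities need us to speak for them."))
```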
eagles/focus_sum_mT5_minshi
eagles
2022-04-21T04:23:12Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-21T03:26:00Z
--- tags: - generated_from_trainer model-index: - name: focus_sum_mT5_minshi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # focus_sum_mT5_minshi This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0930 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.268 | 83.33 | 500 | 0.0930 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
obokkkk/wav2vec2-base-timit-demo-colab3
obokkkk
2022-04-21T04:10:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-21T01:39:21Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4832 - Wer: 0.3419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.292 | 4.0 | 500 | 0.7903 | 0.6305 | | 0.5022 | 8.0 | 1000 | 0.4497 | 0.4332 | | 0.2129 | 12.0 | 1500 | 0.4998 | 0.3940 | | 0.1251 | 16.0 | 2000 | 0.4728 | 0.3667 | | 0.0861 | 20.0 | 2500 | 0.4663 | 0.3644 | | 0.0594 | 24.0 | 3000 | 0.4773 | 0.3497 | | 0.0446 | 28.0 | 3500 | 0.4832 | 0.3419 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
dlu66061/wav2vec2-base-timit-demo
dlu66061
2022-04-21T03:16:11Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-20T21:55:56Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4094 - Wer: 0.2825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5419 | 3.45 | 500 | 1.2376 | 0.8772 | | 0.5393 | 6.9 | 1000 | 0.4489 | 0.3894 | | 0.1916 | 10.34 | 1500 | 0.3777 | 0.3185 | | 0.1139 | 13.79 | 2000 | 0.4041 | 0.3058 | | 0.0798 | 17.24 | 2500 | 0.3742 | 0.2988 | | 0.0602 | 20.69 | 3000 | 0.3751 | 0.2897 | | 0.0463 | 24.14 | 3500 | 0.4067 | 0.2865 | | 0.0388 | 27.59 | 4000 | 0.4094 | 0.2825 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.6
espejelomar/fastai-dummy-learner
espejelomar
2022-04-21T03:09:00Z
0
0
fastai
[ "fastai", "region:us" ]
null
2022-04-21T03:08:52Z
--- tags: - fastai --- # Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join our fastai community on the Hugging Face Discord! Greetings fellow fastlearner 🤝! --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
huggingtweets/torstenvolk
huggingtweets
2022-04-21T00:16:11Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-21T00:10:29Z
--- language: en thumbnail: http://www.huggingtweets.com/torstenvolk/1650500124030/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1575782906/110930-ENMA-115240-web_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Torsten Volk</div> <div style="text-align: center; font-size: 14px;">@torstenvolk</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Torsten Volk. | Data | Torsten Volk | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 449 | | Short tweets | 60 | | Tweets kept | 2741 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pgfl6jg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @torstenvolk's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1iccl44p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1iccl44p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/torstenvolk') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
hasnainnaeem/mnist_model
hasnainnaeem
2022-04-20T23:10:09Z
0
0
tf-keras
[ "tf-keras", "license:mit", "region:us" ]
null
2022-04-05T14:41:48Z
--- license: mit --- **Dataset:** MNIST **Accuracy:** 0.986 (98.6%) **Model Structure:** ![Model Summary](model_summary.png)
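**Example usage (sketch):** the snippet below assumes the checkpoint can be re-loaded with the `huggingface_hub` Keras helper and that it expects 28x28x1 inputs scaled to [0, 1], the conventional MNIST layout; the input shape is an assumption, not stated on the card.

```python
# Hedged sketch: reload the Keras model from the Hub and run a dummy prediction.
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("hasnainnaeem/mnist_model")
fake_digit = np.random.rand(1, 28, 28, 1).astype("float32")  # stand-in for a real MNIST image
print(model.predict(fake_digit).argmax(axis=-1))
```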
brad1141/Bert_v5
brad1141
2022-04-20T22:23:00Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "longformer", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-20T19:15:19Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: Bert_v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert_v5 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9191 - Precision: 0.7612 - Recall: 0.8007 - F1: 0.5106 - Accuracy: 0.7357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.0663 | 1.0 | 934 | 0.8636 | 0.6973 | 0.8467 | 0.4082 | 0.7023 | | 0.8354 | 2.0 | 1868 | 0.8261 | 0.7367 | 0.8086 | 0.4733 | 0.7221 | | 0.7164 | 3.0 | 2802 | 0.7737 | 0.7572 | 0.7988 | 0.5055 | 0.7347 | | 0.6149 | 4.0 | 3736 | 0.7542 | 0.7488 | 0.8402 | 0.5176 | 0.7438 | | 0.5153 | 5.0 | 4670 | 0.8185 | 0.7614 | 0.8123 | 0.5017 | 0.7389 | | 0.4314 | 6.0 | 5604 | 0.8599 | 0.7543 | 0.8259 | 0.5085 | 0.7395 | | 0.3689 | 7.0 | 6538 | 0.9191 | 0.7612 | 0.8007 | 0.5106 | 0.7357 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
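## Example usage

A minimal sketch, assuming the checkpoint exposes a standard token-classification head and bundles its tokenizer; the label set is defined by the model config and is not documented on the card, so the output is illustrative only.

```python
# Hedged sketch: token-classification pipeline with word-level aggregation.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="brad1141/Bert_v5",
    aggregation_strategy="simple",
)
print(tagger("The writer opens with a claim and then supports it with two pieces of evidence."))
```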
brad1141/Longformer_v5
brad1141
2022-04-20T19:13:09Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "longformer", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-20T14:39:26Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: Longformer_v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Longformer_v5 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7919 - Precision: 0.8516 - Recall: 0.8678 - F1: 0.6520 - Accuracy: 0.8259 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7744 | 1.0 | 1012 | 0.5785 | 0.8375 | 0.8501 | 0.5798 | 0.8098 | | 0.5211 | 2.0 | 2024 | 0.5415 | 0.8434 | 0.8801 | 0.6251 | 0.8282 | | 0.3996 | 3.0 | 3036 | 0.5565 | 0.8500 | 0.8766 | 0.6303 | 0.8274 | | 0.2964 | 4.0 | 4048 | 0.6017 | 0.8617 | 0.8546 | 0.6415 | 0.8240 | | 0.2187 | 5.0 | 5060 | 0.6660 | 0.8485 | 0.8718 | 0.6431 | 0.8271 | | 0.1603 | 6.0 | 6072 | 0.7235 | 0.8493 | 0.8759 | 0.6544 | 0.8290 | | 0.1208 | 7.0 | 7084 | 0.7919 | 0.8516 | 0.8678 | 0.6520 | 0.8259 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
eslamxm/mT5_multilingual_XLSum-finetuned-ar-wikilingua
eslamxm
2022-04-20T18:31:30Z
10
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "dataset:wiki_lingua", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-04-20T06:33:58Z
--- tags: - summarization - generated_from_trainer datasets: - wiki_lingua model-index: - name: mT5_multilingual_XLSum-finetuned-ar-wikilingua results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5_multilingual_XLSum-finetuned-ar-wikilingua This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the wiki_lingua dataset. It achieves the following results on the evaluation set: - Loss: 3.6903 - Rouge-1: 24.47 - Rouge-2: 7.69 - Rouge-l: 20.04 - Gen Len: 39.64 - Bertscore: 72.63 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 8 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.4406 | 1.0 | 5111 | 3.9582 | 22.35 | 6.84 | 18.39 | 34.78 | 71.94 | | 4.0158 | 2.0 | 10222 | 3.8316 | 22.87 | 7.24 | 18.92 | 34.7 | 71.99 | | 3.8626 | 3.0 | 15333 | 3.7695 | 23.65 | 7.5 | 19.6 | 35.53 | 72.31 | | 3.7626 | 4.0 | 20444 | 3.7313 | 24.01 | 7.59 | 19.68 | 38.16 | 72.41 | | 3.6934 | 5.0 | 25555 | 3.7118 | 24.37 | 7.77 | 19.93 | 39.36 | 72.47 | | 3.6421 | 6.0 | 30666 | 3.7016 | 24.48 | 7.8 | 20.07 | 38.58 | 72.58 | | 3.6073 | 7.0 | 35777 | 3.6907 | 24.31 | 7.83 | 20.13 | 38.07 | 72.5 | | 3.5843 | 8.0 | 40888 | 3.6903 | 24.55 | 7.88 | 20.2 | 38.33 | 72.6 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
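## Example usage

A minimal sketch, assuming the checkpoint is a standard mT5 seq2seq model; the placeholder input and the generation settings are illustrative, not taken from the card.

```python
# Hedged sketch: explicit tokenizer/model loading and beam-search generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "eslamxm/mT5_multilingual_XLSum-finetuned-ar-wikilingua"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = "..."  # replace with an Arabic article to summarize
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4, no_repeat_ngram_size=2)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```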
OWG/convbert-base-spanish
OWG
2022-04-20T18:06:55Z
0
0
null
[ "onnx", "ConvBERT", "es", "dataset:large_spanish_corpus", "arxiv:2008.02496", "license:mit", "region:us" ]
null
2022-04-20T17:52:31Z
--- language: - es tags: - ConvBERT license: mit datasets: - large_spanish_corpus --- # ConvBERT ## Model Description ConvBERT base pre-trained on large_spanish_corpus. The ConvBERT architecture was presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper. ## Original implementation Follow [this link](https://huggingface.co/mrm8488/convbert-base-spanish) to see the original implementation. # How to use Download the model by cloning the repository via: ```bash git clone https://huggingface.co/OWG/convbert-base-spanish ``` Then you can use the model. Because it is a `base` model, you need to fine-tune it on your specific task before using it.
sap218/PatientINF
sap218
2022-04-20T17:28:39Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-04-20T17:20:08Z
--- license: mit --- PatientINF embedding model, derived from ClinicalBERT and retrained on patient forum conversations. See the GitHub repository for documentation: https://github.com/sap218/PatientINF. The model was built for my PhD thesis and is open-sourced for secondary research; the thesis will be available soon.
EColi/SB_Classifier
EColi
2022-04-20T17:27:13Z
63
1
generic
[ "generic", "pytorch", "bert", "text-classification", "region:us" ]
text-classification
2022-04-20T01:19:56Z
--- tags: - text-classification - generic library_name: generic widget: - text: 'This video is sponsored by squarespace' example_title: Sponsor - text: 'Check out the merch at linustechtips.com' example_title: Unpaid/self promotion - text: "Don't forget to like, comment and subscribe" example_title: Interaction reminder - text: 'pqh4LfPeCYs,824.695,826.267,826.133,829.876,835.933,927.581' example_title: Extract text from video ---
ahmednasser/DistilBert-FakeNews
ahmednasser
2022-04-20T16:29:21Z
8
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "fake-news", "en", "arxiv:1910.01108", "endpoints_compatible", "region:us" ]
text-classification
2022-04-07T17:40:27Z
--- language: - en tags: - text-classification - fake-news - pytorch datasets: - Fake News https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset metrics: - Accuracy, AUC --- ## Model description: [Distilbert](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding. It is smaller and faster than BERT and other BERT-based models. [Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the fake-news dataset with the hyperparameters below: ``` learning rate 5e-5, batch size 32, num_train_epochs=2, ``` Full code is available at [DistilBert-FakeNews](https://github.com/anasserhussien/DistilBert-FakeNews) and the dataset at [Fake News dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
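## Example usage

A minimal sketch, assuming a standard DistilBERT sequence-classification checkpoint; the mapping of class indices to fake/real labels comes from the model config and is not restated here, and the headline is illustrative only.

```python
# Hedged sketch: explicit loading plus softmax over the classification logits.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "ahmednasser/DistilBert-FakeNews"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Breaking: senator announces new infrastructure bill.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```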
espnet/GunnarThor_talromur_h_fastspeech2
espnet
2022-04-20T15:37:27Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:talromur", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-04-20T15:37:06Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - talromur license: cc-by-4.0 --- ## ESPnet2 TTS model ### `espnet/GunnarThor_talromur_h_fastspeech2` This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 49a284e69308d81c142b89795de255b4ce290c54 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_h_fastspeech2 ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/h/tts_train_fastspeech2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 800 batch_size: 20 valid_batch_size: null batch_bins: 2500000 valid_batch_bins: null train_shape_file: - exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn - exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape valid_shape_file: - exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn - exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_h_phn/text - text - text - - exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_h_phn/durations - durations - text_int - - dump/raw/train_h_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_h_phn/text - text - text - - exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_h_phn/durations - durations - text_int - - dump/raw/dev_h_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - ',' - . 
- r - t - n - a0 - s - I0 - D - l - Y0 - m - v - h - E1 - k - a:1 - E:1 - f - G - j - T - a1 - p - c - au:1 - i:1 - O:1 - I:1 - E0 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ou:1 - ei:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - n_0 - ei0 - O0 - ou1 - ai:1 - '9:1' - ai1 - i1 - '90' - au0 - c_h - x - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Oi1 - Yi0 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 256 use_masking: true use_scaled_pos_enc: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 init_type: xavier_uniform init_enc_alpha: 1.0 init_dec_alpha: 1.0 transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/h/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE 
International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/GunnarThor_talromur_g_tacotron2
espnet
2022-04-20T15:36:20Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:talromur", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-04-20T15:35:25Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - talromur license: cc-by-4.0 --- ## ESPnet2 TTS model ### `espnet/GunnarThor_talromur_g_tacotron2` This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 49a284e69308d81c142b89795de255b4ce290c54 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_g_tacotron2 ``` ## TTS config <details><summary>expand</summary> ``` config: ./conf/tuning/train_tacotron2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/g/tts_train_tacotron2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 39151 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 500 batch_size: 20 valid_batch_size: null batch_bins: 2560000 valid_batch_bins: null train_shape_file: - exp/g/tts_stats_raw_phn_none/train/text_shape.phn - exp/g/tts_stats_raw_phn_none/train/speech_shape valid_shape_file: - exp/g/tts_stats_raw_phn_none/valid/text_shape.phn - exp/g/tts_stats_raw_phn_none/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_g_phn/text - text - text - - dump/raw/train_g_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_g_phn/text - text - text - - dump/raw/dev_g_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-06 weight_decay: 0.0 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ',' - . 
- r - t - n - a0 - s - I0 - D - l - Y0 - m - v - h - E1 - k - a:1 - E:1 - f - G - j - T - a1 - p - c - au:1 - i:1 - O:1 - I:1 - E0 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ou:1 - ei:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - n_0 - ei0 - O0 - ou1 - ai:1 - '9:1' - ai1 - i1 - '90' - au0 - c_h - x - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Oi1 - Yi0 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/g/tts_stats_raw_phn_none/train/feats_stats.npz tts: tacotron2 tts_conf: embed_dim: 512 elayers: 1 eunits: 512 econv_layers: 3 econv_chans: 512 econv_filts: 5 atype: location adim: 512 aconv_chans: 32 aconv_filts: 15 cumulate_att_w: true dlayers: 2 dunits: 1024 prenet_layers: 2 prenet_units: 256 postnet_layers: 5 postnet_chans: 512 postnet_filts: 5 output_activation: null use_batch_norm: true use_concate: true use_residual: false dropout_rate: 0.5 zoneout_rate: 0.1 reduction_factor: 1 spk_embed_dim: null use_masking: true bce_pos_weight: 5.0 use_guided_attn_loss: true guided_attn_loss_sigma: 0.4 guided_attn_loss_lambda: 1.0 pitch_extract: null pitch_extract_conf: {} pitch_normalize: null pitch_normalize_conf: {} energy_extract: null energy_extract_conf: {} energy_normalize: null energy_normalize_conf: {} required: - output_dir - token_list version: 0.10.7a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/GunnarThor_talromur_g_fastspeech2
espnet
2022-04-20T15:35:50Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:talromur", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-04-20T15:35:34Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - talromur license: cc-by-4.0 --- ## ESPnet2 TTS model ### `espnet/GunnarThor_talromur_g_fastspeech2` This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 49a284e69308d81c142b89795de255b4ce290c54 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_g_fastspeech2 ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/g/tts_train_fastspeech2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 800 batch_size: 20 valid_batch_size: null batch_bins: 2500000 valid_batch_bins: null train_shape_file: - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape valid_shape_file: - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_g_phn/text - text - text - - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_g_phn/durations - durations - text_int - - dump/raw/train_g_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_g_phn/text - text - text - - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_g_phn/durations - durations - text_int - - dump/raw/dev_g_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - ',' - . 
- r - t - n - a0 - s - I0 - D - l - Y0 - m - v - h - E1 - k - a:1 - E:1 - f - G - j - T - a1 - p - c - au:1 - i:1 - O:1 - I:1 - E0 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ou:1 - ei:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - n_0 - ei0 - O0 - ou1 - ai:1 - '9:1' - ai1 - i1 - '90' - au0 - c_h - x - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Oi1 - Yi0 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 256 use_masking: true use_scaled_pos_enc: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 init_type: xavier_uniform init_enc_alpha: 1.0 init_dec_alpha: 1.0 transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE 
International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
celinelee/bart-finetuned-conala-3
celinelee
2022-04-20T15:10:58Z
4
1
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-20T02:00:22Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge - bleu model-index: - name: bart-finetuned-conala-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-finetuned-conala-3 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an CoNaLa. It achieves the following results on the evaluation set: - Loss: 1.8253 - Rouge1: 47.4345 - Rouge2: 23.8936 - Rougel: 45.317 - Rougelsum: 45.4339 - Bleu: 0.0657 - Gen Len: 58.0 ## Model description More information needed ## Intended uses & limitations Code snippet -> NL intent ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:| | No log | 0.08 | 50 | 2.7823 | 35.8458 | 12.1898 | 33.7466 | 33.8377 | 0.0041 | 58.0 | | No log | 0.17 | 100 | 2.4223 | 37.2633 | 13.429 | 34.4943 | 34.5533 | 0.0087 | 58.0 | | No log | 0.25 | 150 | 2.2696 | 40.6963 | 16.5785 | 38.1213 | 38.16 | 0.0167 | 58.0 | | No log | 0.34 | 200 | 2.3168 | 41.3324 | 17.292 | 39.0117 | 39.113 | 0.0173 | 58.0 | | No log | 0.42 | 250 | 2.3187 | 41.1345 | 16.6829 | 38.8514 | 38.891 | 0.0237 | 58.0 | | No log | 0.5 | 300 | 2.1701 | 41.0145 | 17.5601 | 39.166 | 39.249 | 0.0206 | 58.0 | | No log | 0.59 | 350 | 2.2035 | 41.7506 | 17.7251 | 39.4856 | 39.5647 | 0.0292 | 58.0 | | No log | 0.67 | 400 | 2.1006 | 43.0324 | 19.9801 | 40.8704 | 40.9399 | 0.0319 | 58.0 | | No log | 0.76 | 450 | 2.0563 | 43.2151 | 18.7409 | 40.4183 | 40.502 | 0.0244 | 58.0 | | 2.4902 | 0.84 | 500 | 2.0468 | 43.2215 | 18.3484 | 40.9498 | 41.0682 | 0.0317 | 58.0 | | 2.4902 | 0.92 | 550 | 2.0222 | 44.9934 | 19.8389 | 42.4478 | 42.5687 | 0.0372 | 58.0 | | 2.4902 | 1.01 | 600 | 2.1095 | 43.8293 | 19.5682 | 40.882 | 40.9518 | 0.0311 | 58.0 | | 2.4902 | 1.09 | 650 | 2.0124 | 43.6928 | 19.6878 | 39.6602 | 39.7368 | 0.0417 | 58.0 | | 2.4902 | 1.18 | 700 | 2.0027 | 46.2115 | 21.9475 | 43.5869 | 43.6713 | 0.0477 | 58.0 | | 2.4902 | 1.26 | 750 | 1.9599 | 45.9388 | 22.0368 | 43.4731 | 43.5656 | 0.043 | 58.0 | | 2.4902 | 1.34 | 800 | 1.9467 | 44.7518 | 20.4755 | 42.489 | 42.6274 | 0.0394 | 58.0 | | 2.4902 | 1.43 | 850 | 1.9643 | 44.1584 | 20.8833 | 41.8848 | 41.9733 | 0.0441 | 58.0 | | 2.4902 | 1.51 | 900 | 1.8926 | 47.3789 | 22.9104 | 45.0164 | 45.0822 | 0.0445 | 58.0 | | 2.4902 | 1.6 | 950 | 1.8855 | 46.8329 | 22.1133 | 44.1788 | 44.2666 | 0.0431 | 58.0 | | 1.8023 | 1.68 | 1000 | 1.9160 | 47.1319 | 22.9792 | 44.4807 | 44.6103 | 0.0475 | 58.0 | | 1.8023 | 1.76 | 1050 | 1.8498 | 48.8005 | 24.4785 | 46.4564 | 46.5427 | 0.0576 | 58.0 | | 1.8023 | 1.85 | 1100 | 1.8611 | 47.8327 | 23.2086 | 45.5999 | 45.6868 | 0.0487 | 58.0 | | 1.8023 | 1.93 | 1150 | 1.8497 | 47.7267 | 23.2021 | 45.5104 | 45.546 | 0.0512 | 58.0 | | 1.8023 | 2.02 | 1200 | 1.8335 | 47.1502 | 22.8336 | 44.7614 | 44.7927 | 0.0566 | 58.0 | | 1.8023 | 2.1 | 1250 | 1.8779 | 46.6645 | 22.9162 | 44.0086 | 44.2021 | 0.0539 | 58.0 | | 
1.8023 | 2.18 | 1300 | 1.8514 | 48.1544 | 24.7977 | 45.949 | 46.0254 | 0.0719 | 58.0 | | 1.8023 | 2.27 | 1350 | 1.8658 | 46.7655 | 23.4813 | 44.5872 | 44.6907 | 0.069 | 58.0 | | 1.8023 | 2.35 | 1400 | 1.8400 | 46.2749 | 23.6528 | 44.3149 | 44.4056 | 0.0572 | 58.0 | | 1.8023 | 2.44 | 1450 | 1.8343 | 46.6169 | 23.8005 | 44.5486 | 44.6125 | 0.0547 | 58.0 | | 1.3851 | 2.52 | 1500 | 1.8220 | 47.4739 | 24.3457 | 45.4959 | 45.6216 | 0.0662 | 58.0 | | 1.3851 | 2.61 | 1550 | 1.8333 | 47.6311 | 24.3616 | 45.5904 | 45.6146 | 0.0666 | 58.0 | | 1.3851 | 2.69 | 1600 | 1.8091 | 47.4633 | 24.0785 | 45.2493 | 45.2845 | 0.0645 | 58.0 | | 1.3851 | 2.77 | 1650 | 1.8085 | 47.6495 | 23.8386 | 45.5077 | 45.5848 | 0.0639 | 58.0 | | 1.3851 | 2.86 | 1700 | 1.8377 | 46.9721 | 23.4325 | 44.8386 | 44.9003 | 0.0647 | 58.0 | | 1.3851 | 2.94 | 1750 | 1.8238 | 47.5266 | 23.9843 | 45.3897 | 45.473 | 0.0653 | 58.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 2.1.0 - Tokenizers 0.10.3
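## Example usage

The intended use above is code snippet -> NL intent; the sketch below assumes the checkpoint works through the standard text2text-generation pipeline, and the example snippet is illustrative.

```python
# Hedged sketch: generate a natural-language description of a Python snippet.
from transformers import pipeline

code2nl = pipeline("text2text-generation", model="celinelee/bart-finetuned-conala-3")
snippet = "sorted(d.items(), key=lambda kv: kv[1], reverse=True)"
print(code2nl(snippet, max_length=58)[0]["generated_text"])
```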
ktangri/autotrain-financial-sentiment-765323474
ktangri
2022-04-20T14:35:01Z
17
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:ktangri/autotrain-data-financial-sentiment", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-20T14:34:03Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - ktangri/autotrain-data-financial-sentiment co2_eq_emissions: 0.007501354635994803 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 765323474 - CO2 Emissions (in grams): 0.007501354635994803 ## Validation Metrics - Loss: 0.0447433702647686 - Accuracy: 0.9823788546255506 - Macro F1: 0.974405452470854 - Micro F1: 0.9823788546255506 - Weighted F1: 0.9823043153179869 - Macro Precision: 0.978208375548801 - Micro Precision: 0.9823788546255506 - Weighted Precision: 0.9823204968555985 - Macro Recall: 0.9707159078140736 - Micro Recall: 0.9823788546255506 - Weighted Recall: 0.9823788546255506 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ktangri/autotrain-financial-sentiment-765323474 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ktangri/autotrain-financial-sentiment-765323474", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ktangri/autotrain-financial-sentiment-765323474", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
samwell/english_to_twi
samwell
2022-04-20T13:21:08Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2022-04-20T13:12:53Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'RMSprop', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False} - training_precision: float32 ## Training Metrics | Epochs | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | |--- |--- |--- |--- |--- | | 1| 2.157| 0.289| 1.828| 0.371| | 2| 1.795| 0.388| 1.746| 0.409| | 3| 1.639| 0.436| 1.597| 0.43| | 4| 1.523| 0.471| 1.54| 0.463| | 5| 1.427| 0.503| 1.485| 0.479| ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
James-kc-min/AGT_Roberta2
James-kc-min
2022-04-20T12:51:42Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-20T10:13:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: AGT_Roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AGT_Roberta This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.0 - Tokenizers 0.12.1
dimbyTa/rock-challenge-ViT-two-by-two
dimbyTa
2022-04-20T11:19:22Z
63
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-31T19:44:47Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rock-challenge-ViT-two-by-two results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9663800001144409 --- # rock-challenge-ViT-two-by-two Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### fines ![fines](images/fines.png) #### large ![large](images/large.png) #### medium ![medium](images/medium.png) #### pellets ![pellets](images/pellets.png)
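## Example usage

A minimal sketch, assuming the checkpoint bundles its ViT feature extractor; the image path is a placeholder for a local photo of the material to classify.

```python
# Hedged sketch: image-classification pipeline; scores cover fines / large / medium / pellets.
from transformers import pipeline

classifier = pipeline("image-classification", model="dimbyTa/rock-challenge-ViT-two-by-two")
print(classifier("rock_pile.jpg"))  # hypothetical local image file
```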
James-kc-min/AGT_Roberta
James-kc-min
2022-04-20T09:39:04Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-20T09:34:51Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: AGT_Roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AGT_Roberta This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.0 - Tokenizers 0.12.1
eagles/focus_sum_gpt2
eagles
2022-04-20T09:03:21Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-20T07:56:28Z
--- tags: - generated_from_trainer model-index: - name: focus_sum_gpt2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # focus_sum_gpt2 This model is a fine-tuned version of [uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5 | 15.15 | 500 | 1.1917 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
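## Example usage

A minimal sketch, assuming the fine-tuned checkpoint keeps the tokenizer of its uer/gpt2-chinese-cluecorpussmall base and loads through the standard text-generation pipeline; the prompt and sampling settings are illustrative only.

```python
# Hedged sketch: text-generation pipeline over the fine-tuned Chinese GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="eagles/focus_sum_gpt2")
print(generator("这是一段需要总结的文本:", max_length=64, do_sample=True)[0]["generated_text"])
```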
Souvikcmsa/Roberta_Sentiment_Analysis
Souvikcmsa
2022-04-20T08:53:33Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain", "en", "dataset:Souvikcmsa/autotrain-data-sentimentAnalysis_By_Souvik", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-20T08:50:58Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - Souvikcmsa/autotrain-data-sentimentAnalysis_By_Souvik co2_eq_emissions: 4.453029772491864 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 762623422 - CO2 Emissions (in grams): 4.453029772491864 ## Validation Metrics - Loss: 0.40843138098716736 - Accuracy: 0.8302828618968386 - Macro F1: 0.8302447939743022 - Micro F1: 0.8302828618968385 - Weighted F1: 0.8302151855901072 - Macro Precision: 0.8310980209442669 - Micro Precision: 0.8302828618968386 - Weighted Precision: 0.8313262654775467 - Macro Recall: 0.8305699539252172 - Micro Recall: 0.8302828618968386 - Weighted Recall: 0.8302828618968386 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
AswiN037/tamil-Roberta-small
AswiN037
2022-04-20T08:43:52Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "Tamil-Tokenizer", "Tamil-language-model", "dataset:oscar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-20T08:29:14Z
--- language: - Tamil tags: - Tamil-Tokenizer - Tamil-language-model license: "apache-2.0" datasets: - oscar --- # Tokenizer - BPE, vocabulary size 30,522 ## Model - RoBERTa (small), trained with a masked language modeling (MLM) objective - Training data: OSCAR dataset, 5,000 lines only
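## Example usage

A minimal sketch, assuming the checkpoint uses RoBERTa's standard `<mask>` token and bundles its BPE tokenizer; the Tamil sentence is illustrative only.

```python
# Hedged sketch: fill-mask pipeline over the Tamil RoBERTa checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="AswiN037/tamil-Roberta-small")
for prediction in fill("தமிழ் ஒரு <mask> மொழி."):
    print(prediction["token_str"], prediction["score"])
```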
demdecuong/vihealthbert-base-word
demdecuong
2022-04-20T07:55:52Z
56
4
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-04-20T07:49:34Z
# <a name="introduction"></a> ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining ViHealthBERT is the a strong baseline language models for Vietnamese in Healthcare domain. We empirically investigate our model with different training strategies, achieving state of the art (SOTA) performances on 3 downstream tasks: NER (COVID-19 & ViMQ), Acronym Disambiguation, and Summarization. We introduce two Vietnamese datasets: the acronym dataset (acrDrAid) and the FAQ summarization dataset in the healthcare domain. Our acrDrAid dataset is annotated with 135 sets of keywords. The general approaches and experimental results of ViHealthBERT can be found in our LREC-2022 Poster [paper]() (updated soon): @article{vihealthbert, title = {{ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining}}, author = {Minh Phuc Nguyen, Vu Hoang Tran, Vu Hoang, Ta Duc Huy, Trung H. Bui, Steven Q. H. Truong }, journal = {13th Edition of its Language Resources and Evaluation Conference}, year = {2022} } ### Installation <a name="install2"></a> - Python 3.6+, and PyTorch >= 1.6 - Install `transformers`: `pip install transformers==4.2.0` ### Pre-trained models <a name="models2"></a> Model | #params | Arch. | Tokenizer ---|---|---|--- `demdecuong/vihealthbert-base-word` | 135M | base | Word-level `demdecuong/vihealthbert-base-syllable` | 135M | base | Syllable-level ### Example usage <a name="usage1"></a> ```python import torch from transformers import AutoModel, AutoTokenizer vihealthbert = AutoModel.from_pretrained("demdecuong/vihealthbert-base-word") tokenizer = AutoTokenizer.from_pretrained("demdecuong/vihealthbert-base-word") # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED! line = "Tôi là sinh_viên trường đại_học Công_nghệ ." input_ids = torch.tensor([tokenizer.encode(line)]) with torch.no_grad(): features = vihealthbert(input_ids) # Models outputs are now tuples ``` ### Example usage for raw text <a name="usage2"></a> Since ViHealthBERT used the [RDRSegmenter](https://github.com/datquocnguyen/RDRsegmenter) from [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) to pre-process the pre-training data. We highly recommend use the same word-segmenter for ViHealthBERT downstream applications. #### Installation ``` # Install the vncorenlp python wrapper pip3 install vncorenlp # Download VnCoreNLP-1.1.1.jar & its word segmentation component (i.e. RDRSegmenter) mkdir -p vncorenlp/models/wordsegmenter wget https://raw.githubusercontent.com/vncorenlp/VnCoreNLP/master/VnCoreNLP-1.1.1.jar wget https://raw.githubusercontent.com/vncorenlp/VnCoreNLP/master/models/wordsegmenter/vi-vocab wget https://raw.githubusercontent.com/vncorenlp/VnCoreNLP/master/models/wordsegmenter/wordsegmenter.rdr mv VnCoreNLP-1.1.1.jar vncorenlp/ mv vi-vocab vncorenlp/models/wordsegmenter/ mv wordsegmenter.rdr vncorenlp/models/wordsegmenter/ ``` `VnCoreNLP-1.1.1.jar` (27MB) and folder `models/` must be placed in the same working folder. #### Example usage ``` # See more details at: https://github.com/vncorenlp/VnCoreNLP # Load rdrsegmenter from VnCoreNLP from vncorenlp import VnCoreNLP rdrsegmenter = VnCoreNLP("/Absolute-path-to/vncorenlp/VnCoreNLP-1.1.1.jar", annotators="wseg", max_heap_size='-Xmx500m') # Input text = "Ông Nguyễn Khắc Chúc đang làm việc tại Đại học Quốc gia Hà Nội. Bà Lan, vợ ông Chúc, cũng làm việc tại đây." # To perform word (and sentence) segmentation sentences = rdrsegmenter.tokenize(text) for sentence in sentences: print(" ".join(sentence)) ```
ctoraman/RoBERTa-TR-medium-wp-66k
ctoraman
2022-04-20T07:01:39Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T09:15:04Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium WordPiece 66k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 66.7k.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
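Because the checkpoint was trained with an MLM objective, masked-token prediction is a quick sanity check. The snippet below is an added, hedged sketch: the tokenizer file name (`tokenizer.json`) and the example sentence are assumptions, not part of the original card.

```python
from transformers import AutoModelForMaskedLM, PreTrainedTokenizerFast, pipeline

# Added sketch: masked-token prediction. The tokenizer file name below is a
# hypothetical placeholder; use the tokenizer file shipped with the repository.
model = AutoModelForMaskedLM.from_pretrained("ctoraman/RoBERTa-TR-medium-wp-66k")
tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")  # hypothetical file name
tokenizer.mask_token = "[MASK]"; tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"; tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"; tokenizer.model_max_length = 514

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("bugün hava çok [MASK] ."))
```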
ctoraman/RoBERTa-TR-medium-wp-16k
ctoraman
2022-04-20T07:00:50Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-08T14:03:17Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium WordPiece 16k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 16.7k.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
ctoraman/RoBERTa-TR-medium-word-7k
ctoraman
2022-04-20T07:00:26Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T13:17:25Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium Word-level 7k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 7.5k.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
ctoraman/RoBERTa-TR-medium-morph-66k
ctoraman
2022-04-20T06:58:48Z
7
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T12:47:05Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium Morph-level 66k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Morph-level, which means that text is split according to a Turkish morphological analyzer (Zemberek). Vocabulary size is 64.2k.

## Note that this model needs a preprocessing step before running, because the tokenizer file is not a morphological analyzer. That is, the test dataset cannot be split into morphemes with the tokenizer file. The user needs to process any test dataset with a Turkish morphological analyzer (Zemberek in this case) before running evaluation.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
ctoraman/RoBERTa-TR-medium-bpe-66k
ctoraman
2022-04-20T06:54:49Z
7
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T12:10:26Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium BPE 66k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 66.7k.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
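The commented-out line in the snippet above points at sequence classification; the following added sketch shows that path end to end. The tokenizer file name, `num_labels=2`, and the example sentence are illustrative assumptions, and the classification head is randomly initialized until fine-tuned.

```python
import torch
from transformers import AutoModelForSequenceClassification, PreTrainedTokenizerFast

# Added sketch: sequence classification with this backbone. num_labels, the
# tokenizer file name and the example sentence are illustrative assumptions;
# the classification head is randomly initialized and still needs fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(
    "ctoraman/RoBERTa-TR-medium-bpe-66k", num_labels=2)
tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")  # hypothetical file name
tokenizer.cls_token = "[CLS]"; tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"; tokenizer.unk_token = "[UNK]"
tokenizer.mask_token = "[MASK]"; tokenizer.model_max_length = 514

inputs = tokenizer("bu film gerçekten güzeldi", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```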
ctoraman/RoBERTa-TR-medium-bpe-28k
ctoraman
2022-04-20T06:48:33Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T12:00:46Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium BPE 28k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 28.6k.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
ctoraman/RoBERTa-TR-medium-bpe-16k
ctoraman
2022-04-20T06:48:03Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "arxiv:2204.08832", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-08T13:44:50Z
---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium BPE 16k (uncased)

Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.

Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 16.7k.

The details and performance comparisons can be found in this paper: https://arxiv.org/abs/2204.08832

The following code can be used for model loading and tokenization; the example max length (514) can be changed:

```
model = AutoModel.from_pretrained([model_path])

#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
  doi = {10.48550/ARXIV.2204.08832},
  url = {https://arxiv.org/abs/2204.08832},
  author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
huggingtweets/elonmusk-iamsrk
huggingtweets
2022-04-20T04:58:07Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-20T04:52:51Z
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-iamsrk/1650430682800/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg&#39;)"></div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1318511011117199362/htNsviXp_400x400.jpg&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Shah Rukh Khan</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-iamsrk</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Elon Musk & Shah Rukh Khan.

| Data | Elon Musk | Shah Rukh Khan |
| --- | --- | --- |
| Tweets downloaded | 221 | 3212 |
| Retweets | 14 | 56 |
| Short tweets | 69 | 278 |
| Tweets kept | 138 | 2878 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39qg1l4s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-iamsrk's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/840j96ek) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/840j96ek/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/elonmusk-iamsrk')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mdm/DialoGPT-small-Kanye
mdm
2022-04-20T04:25:18Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-16T04:34:24Z
---
tags:
- conversational
---

# Kanye West AI - DialoGPT Small

Kanye West DialoGPT model built with lyrics from Kaggle (https://www.kaggle.com/datasets/convolutionalnn/kanye-west-lyrics-dataset) and resources from Lynn Zheng
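The card ships no usage snippet; the following is an added sketch of the standard DialoGPT-style chat loop applied to this checkpoint (generation settings are illustrative).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Added sketch: standard DialoGPT-style chat loop with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("mdm/DialoGPT-small-Kanye")
model = AutoModelForCausalLM.from_pretrained("mdm/DialoGPT-small-Kanye")

chat_history_ids = None
for step in range(3):
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if step > 0 else new_ids
    chat_history_ids = model.generate(bot_input_ids, max_length=1000,
                                      pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print("KanyeBot:", reply)
```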
proseph/ctrlv-wav2vec2-tokenizer
proseph
2022-04-20T03:40:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-20T01:08:37Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ctrlv-wav2vec2-tokenizer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ctrlv-wav2vec2-tokenizer

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3967
- Wer: 0.3138

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 3.4359 | 3.45 | 500 | 1.3595 | 0.9159 |
| 0.5692 | 6.9 | 1000 | 0.4332 | 0.4036 |
| 0.2198 | 10.34 | 1500 | 0.4074 | 0.3678 |
| 0.1314 | 13.79 | 2000 | 0.3480 | 0.3409 |
| 0.0929 | 17.24 | 2500 | 0.3714 | 0.3346 |
| 0.0692 | 20.69 | 3000 | 0.3977 | 0.3224 |
| 0.0542 | 24.14 | 3500 | 0.4068 | 0.3187 |
| 0.0422 | 27.59 | 4000 | 0.3967 | 0.3138 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
tuhailong/bi_encoder_roberta-wwm-ext
tuhailong
2022-04-20T02:45:22Z
10
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "sbert", "zh", "dataset:dialogue", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-04-19T11:28:07Z
---
language: zh
tags:
- sbert
datasets:
- dialogue
---

# Data

The training data is sentence-similarity data from e-commerce dialogue, about 500k sentence pairs.

## Model

Model created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a bi-encoder.

### Usage

```python
>>> from sentence_transformers import SentenceTransformer, util
>>> model = SentenceTransformer("tuhailong/bi_encoder_roberta-wwm-ext", device="cuda:1")
>>> model.max_seq_length = 32
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> embeddings1 = model.encode([sentences[0]], convert_to_tensor=True)
>>> embeddings2 = model.encode([sentences[1]], convert_to_tensor=True)
>>> scores = util.cos_sim(embeddings1, embeddings2).cpu().numpy()
>>> print(scores)
```

#### Code

Training code: https://github.com/TTurn/bi-encoder

##### PS

Because a pooling layer and a dense layer are added after the base model, the model files include sub-folders. The repository therefore contains the additional files "1_Pooling-config.json", "2_Dense-config.json" and "2_Dense-pytorch_model.bin". After downloading these files, rename them to "1_Pooling/config.json", "2_Dense/config.json" and "2_Dense/pytorch_model.bin".
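To automate the renaming described in the PS above, something like the added sketch below can be used; the local download directory name is an assumption.

```python
import os
import shutil

# Added sketch: move the flat files into the sub-folders SentenceTransformer expects.
# model_dir is an assumed local download directory.
model_dir = "bi_encoder_roberta-wwm-ext"
moves = {
    "1_Pooling-config.json": "1_Pooling/config.json",
    "2_Dense-config.json": "2_Dense/config.json",
    "2_Dense-pytorch_model.bin": "2_Dense/pytorch_model.bin",
}
for src, dst in moves.items():
    dst_path = os.path.join(model_dir, dst)
    os.makedirs(os.path.dirname(dst_path), exist_ok=True)
    shutil.move(os.path.join(model_dir, src), dst_path)
```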
tuhailong/PairSupCon-roberta-wwm-ext
tuhailong
2022-04-20T02:44:32Z
9
0
transformers
[ "transformers", "pytorch", "bert", "sbert", "zh", "dataset:dialogue", "endpoints_compatible", "region:us" ]
null
2022-04-19T13:09:36Z
---
language: zh
tags:
- sbert
datasets:
- dialogue
---

# Data

The training data is sentence-similarity data from e-commerce dialogue, about 500k sentence pairs.

## Model

Model created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a bi-encoder. The model is trained with the code from [PairSupCon](https://github.com/amazon-research/sentence-representations/tree/main/PairSupCon).

### Usage

See [test.py](https://github.com/TTurn/sentence-representations/edit/main/PairSupCon/test.py).

#### Code

Training code: https://github.com/TTurn/sentence-representations/tree/main/PairSupCon
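Since the card only links to an external test script, the added sketch below shows plain sentence-embedding extraction with the underlying encoder. Mean pooling over the last hidden state is an assumption here, not necessarily the pooling used by the PairSupCon evaluation code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Added sketch: sentence embeddings via mean pooling over the encoder output.
# Mean pooling is an assumption; see the linked PairSupCon code for the exact protocol.
tokenizer = AutoTokenizer.from_pretrained("tuhailong/PairSupCon-roberta-wwm-ext")
model = AutoModel.from_pretrained("tuhailong/PairSupCon-roberta-wwm-ext")

sentences = ["今天天气不错", "今天心情不错"]
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=32, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (batch, seq_len, dim)
mask = inputs["attention_mask"].unsqueeze(-1)              # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```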
tuhailong/cross_encoder_roberta-wwm-ext_v2
tuhailong
2022-04-20T02:41:07Z
28
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "cross-encoder", "zh", "dataset:dialogue", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T11:21:05Z
---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---

# Data

The training data is sentence-similarity data from e-commerce dialogue, about 500k sentence pairs.

## Model

Model created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained model is hfl/chinese-roberta-wwm-ext.

This model structure is the same as [tuhailong/cross_encoder_roberta-wwm-ext_v1](https://huggingface.co/tuhailong/cross_encoder_roberta-wwm-ext_v1); the difference is that the number of epochs is changed from 5 to 1, which performs better on my dataset.

### Usage

```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_roberta-wwm-ext_v2", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```

#### Code

Training code: https://github.com/TTurn/cross-encoder
tuhailong/cross_encoder_electra-180g-large-discriminator
tuhailong
2022-04-20T02:40:23Z
12
1
transformers
[ "transformers", "pytorch", "electra", "text-classification", "cross-encoder", "zh", "dataset:dialogue", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T13:25:37Z
---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---

# Data

The training data is sentence-similarity data from e-commerce dialogue, about 500k sentence pairs.

## Model

Model created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained model is hfl/chinese-electra-180g-large-discriminator.

### Usage

```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_electra-180g-large-discriminator", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```

#### Code

Training code: https://github.com/TTurn/cross-encoder
ChrisZeng/electra-large-discriminator-nli-efl-tweeteval
ChrisZeng
2022-04-20T02:05:43Z
15
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T00:29:30Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-large-discriminator-nli-efl-tweeteval
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# electra-large-discriminator-nli-efl-tweeteval

This model is a fine-tuned version of [ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli](https://huggingface.co/ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.7943
- F1: 0.7872
- Loss: 0.3004

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.4384 | 1.0 | 163 | 0.7444 | 0.7308 | 0.3962 |
| 0.3447 | 2.0 | 326 | 0.7659 | 0.7552 | 0.3410 |
| 0.3057 | 3.0 | 489 | 0.7750 | 0.7688 | 0.3234 |
| 0.287 | 4.0 | 652 | 0.7857 | 0.7779 | 0.3069 |
| 0.2742 | 5.0 | 815 | 0.7887 | 0.7822 | 0.3030 |
| 0.2676 | 6.0 | 978 | 0.7939 | 0.7851 | 0.2982 |
| 0.2585 | 7.0 | 1141 | 0.7909 | 0.7822 | 0.3002 |
| 0.2526 | 8.0 | 1304 | 0.7943 | 0.7876 | 0.3052 |
| 0.2479 | 9.0 | 1467 | 0.7939 | 0.7847 | 0.2997 |
| 0.2451 | 10.0 | 1630 | 0.7956 | 0.7873 | 0.3014 |
| 0.2397 | 11.0 | 1793 | 0.7943 | 0.7872 | 0.3004 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.12.0.dev20220417
- Datasets 2.1.0
- Tokenizers 0.10.3
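The card reports training details only. The added sketch below shows a generic entailment-style forward pass; the premise/hypothesis input format and the label semantics are assumptions carried over from the NLI base model, not confirmed by this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Added sketch: entailment-style scoring. The (premise, hypothesis) pairing and
# the meaning of each output label are assumptions inherited from the NLI base model.
name = "ChrisZeng/electra-large-discriminator-nli-efl-tweeteval"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "I love how this phone handles low-light photos."
hypothesis = "This tweet expresses a positive opinion."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```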
Aldraz/distilbert-base-uncased-finetuned-emotion
Aldraz
2022-04-20T02:04:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T23:26:01Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Accuracy: 0.921
- F1: 0.9214

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 250 | 0.3369 | 0.8985 | 0.8947 |
| No log | 2.0 | 500 | 0.2319 | 0.921 | 0.9214 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.9.1+cpu
- Datasets 2.1.0
- Tokenizers 0.11.6
csikasote/xls-r-1b-bemba-10hrs
csikasote
2022-04-19T22:51:51Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-19T13:35:26Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-1b-bemba-10hrs
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-1b-bemba-10hrs

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2350
- Wer: 0.3524

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 2.2547 | 0.54 | 400 | 0.4199 | 0.5888 |
| 0.5422 | 1.07 | 800 | 0.2689 | 0.4360 |
| 0.4154 | 1.61 | 1200 | 0.2342 | 0.4008 |
| 0.4075 | 2.15 | 1600 | 0.2172 | 0.3579 |
| 0.3326 | 2.68 | 2000 | 0.2151 | 0.3603 |
| 0.2837 | 3.22 | 2400 | 0.2117 | 0.3505 |
| 0.2688 | 3.76 | 2800 | 0.2040 | 0.3559 |
| 0.2401 | 4.3 | 3200 | 0.2099 | 0.3445 |
| 0.2176 | 4.83 | 3600 | 0.1973 | 0.3299 |
| 0.1913 | 5.37 | 4000 | 0.2123 | 0.3432 |
| 0.1683 | 5.91 | 4400 | 0.2032 | 0.3358 |
| 0.1445 | 6.44 | 4800 | 0.2350 | 0.3524 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
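No inference code is included in the card; the added sketch below transcribes a local 16 kHz audio file with the ASR pipeline (the file name is a placeholder).

```python
from transformers import pipeline

# Added sketch: transcribe a local 16 kHz audio file with the fine-tuned model.
# "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="csikasote/xls-r-1b-bemba-10hrs")
print(asr("sample.wav")["text"])
```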
zafercavdar/distilbert-base-turkish-cased-emotion
zafercavdar
2022-04-19T22:03:18Z
7
8
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "emotion", "tr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T21:16:33Z
---
language:
- tr
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion (Translated to Turkish)
metrics:
- Accuracy, F1 Score
---

# distilbert-base-turkish-cased-emotion

## Model description:

[Distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) finetuned on the emotion dataset (translated to Turkish via the Google Translate API) using the HuggingFace Trainer with the hyperparameters below:

```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```

## Model Performance Comparison on Emotion Dataset from Twitter:

| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-turkish-cased-emotion](https://huggingface.co/zafercavdar/distilbert-base-turkish-cased-emotion) | 83.25 | 83.17 | 232.197 |

## How to Use the model:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model='zafercavdar/distilbert-base-turkish-cased-emotion', return_all_scores=True)
prediction = classifier("Bu kütüphaneyi seviyorum, en iyi yanı kolay kullanımı.")
print(prediction)

"""
Output:
[
  [
    {'label': 'sadness', 'score': 0.0026786490343511105},
    {'label': 'joy', 'score': 0.6600754261016846},
    {'label': 'love', 'score': 0.3203163146972656},
    {'label': 'anger', 'score': 0.004358913749456406},
    {'label': 'fear', 'score': 0.002354539930820465},
    {'label': 'surprise', 'score': 0.010216088965535164}
  ]
]
"""
```

## Dataset:

[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).

## Eval results

```json
{
  'eval_accuracy': 0.8325,
  'eval_f1': 0.8317301441160213,
  'eval_loss': 0.5021793842315674,
  'eval_runtime': 8.6167,
  'eval_samples_per_second': 232.108,
  'eval_steps_per_second': 3.714
}
```
ZZ99/tapt_nbme_deberta_v3_base
ZZ99
2022-04-19T21:18:00Z
4
0
transformers
[ "transformers", "pytorch", "deberta-v2", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-11T13:02:18Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-mlm
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test-mlm

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0870
- Accuracy: 0.7576

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
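As a domain-adaptively pretrained MLM checkpoint, it can be sanity-checked with the fill-mask pipeline; the added sketch below uses an illustrative clinical-style sentence.

```python
from transformers import pipeline

# Added sketch: masked-token prediction with the TAPT DeBERTa-v3 checkpoint.
# The sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="ZZ99/tapt_nbme_deberta_v3_base")
print(fill_mask("The patient reports chest [MASK] and shortness of breath."))
```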
gbennett/distilbert-base-uncased-finetuned-emotion
gbennett
2022-04-19T20:26:52Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-19T18:33:45Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9185
    - name: F1
      type: f1
      value: 0.9188211123089982
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2260
- Accuracy: 0.9185
- F1: 0.9188

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.8134 | 1.0 | 250 | 0.3117 | 0.908 | 0.9056 |
| 0.2477 | 2.0 | 500 | 0.2260 | 0.9185 | 0.9188 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
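For inference, the fine-tuned checkpoint can be used directly with the text-classification pipeline; the added sketch below is illustrative (labels may appear as generic LABEL_0 ... LABEL_5 unless the config maps them to emotion names).

```python
from transformers import pipeline

# Added sketch: score a sentence with the fine-tuned emotion classifier.
# Labels may show up as LABEL_0..LABEL_5 if the config has no id2label mapping.
classifier = pipeline("text-classification",
                      model="gbennett/distilbert-base-uncased-finetuned-emotion",
                      return_all_scores=True)
print(classifier("I can't wait to see you again!"))
```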
huggingtweets/billgates-kellytclements-xychelsea
huggingtweets
2022-04-19T20:11:34Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-19T20:06:34Z
---
language: en
thumbnail: http://www.huggingtweets.com/billgates-kellytclements-xychelsea/1650398924367/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1256728742292074496/96By_wwT_400x400.jpg&#39;)"></div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1414439092373254147/JdS8yLGI_400x400.jpg&#39;)"></div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1431338485504430082/zQ6S8nOo_400x400.jpg&#39;)"></div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chelsea E. Manning & Bill Gates & Kelly T. Clements</div>
<div style="text-align: center; font-size: 14px;">@billgates-kellytclements-xychelsea</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Chelsea E. Manning & Bill Gates & Kelly T. Clements.

| Data | Chelsea E. Manning | Bill Gates | Kelly T. Clements |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3213 | 1777 |
| Retweets | 15 | 199 | 296 |
| Short tweets | 1219 | 7 | 26 |
| Tweets kept | 2014 | 3007 | 1455 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37pv1ayu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @billgates-kellytclements-xychelsea's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2e303z5q) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2e303z5q/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/billgates-kellytclements-xychelsea')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)