| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-11 18:29:29 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (555 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-11 18:25:24 |
| card | string (length) | 11 | 1.01M |
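The snapshot above mirrors per-model metadata that the Hub serves publicly; the same fields can also be fetched live with `huggingface_hub` (a minimal sketch, assuming a recent release whose `ModelInfo` objects expose these attributes):

```python
from huggingface_hub import HfApi

# Minimal sketch: query the Hub for the same metadata fields captured in the
# snapshot above (downloads, likes, pipeline_tag). Attribute names assume a
# recent huggingface_hub release.
api = HfApi()
for m in api.list_models(author="huggingartists", sort="downloads", direction=-1, limit=5):
    print(m.id, m.downloads, m.likes, m.pipeline_tag)
```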
modelId: arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1
author: arogyaGurkha
last_modified: 2021-09-11T08:34:39Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:squad_kor_v1", "endpoints_compatible", "region:us" ]
pipeline_tag: question-answering
createdAt: 2022-03-02T23:29:05Z
card:
---
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: koelectra-base-discriminator-finetuned-squad_kor_v1
  results:
  - task:
      name: Question Answering
      type: question-answering
    dataset:
      name: squad_kor_v1
      type: squad_kor_v1
      args: squad_kor_v1
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# koelectra-base-discriminator-finetuned-squad_kor_v1

This model is a fine-tuned version of [monologg/koelectra-base-discriminator](https://huggingface.co/monologg/koelectra-base-discriminator) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5774        | 1.0   | 4025 | 0.5589          |

### Framework versions

- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
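The usage sections above are left as "More information needed"; a minimal inference sketch, assuming the checkpoint works with the standard `question-answering` pipeline (the Korean question and context strings are illustrative only):

```python
from transformers import pipeline

# Hedged sketch: answer a question over a Korean context passage with the
# fine-tuned KoELECTRA checkpoint; the example strings are invented.
qa = pipeline(
    "question-answering",
    model="arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1",
)
result = qa(question="ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?", context="ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμ΄λ‹€.")
print(result["answer"], result["score"])
```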
modelId: huggingartists/dababy
author: huggingartists
last_modified: 2021-09-11T08:01:28Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/dababy", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/dababy
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/b68b0e6ba289b80529dc0194cdb7d00d.639x640x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">DaBaby</div>
  <a href="https://genius.com/artists/dababy"><div style="text-align: center; font-size: 14px;">@dababy</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from DaBaby. The dataset is available [here](https://huggingface.co/datasets/huggingartists/dababy) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/dababy")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/qnkumvdw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on DaBaby's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/24o367up) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/24o367up/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/dababy')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/dababy")
model = AutoModelWithLMHead.from_pretrained("huggingartists/dababy")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
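The pipeline call in the card uses default sampling settings; standard `generate()` keyword arguments pass straight through the pipeline (a sketch; the specific values are arbitrary, not the settings from the W&B run):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/dababy')
# Illustrative sampling settings; tune to taste.
outputs = generator(
    "I am",
    num_return_sequences=3,
    max_length=64,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
)
for out in outputs:
    print(out["generated_text"])
```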
modelId: hwaQing/distilbert-base-uncased-finetuned-mrpc-test
author: hwaQing
last_modified: 2021-09-11T04:10:39Z
downloads: 6
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-03-02T23:29:05Z
card:
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7034313725490197
    - name: F1
      type: f1
      value: 0.8207407407407408
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-mrpc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5708
- Accuracy: 0.7034
- F1: 0.8207

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 58   | 0.5708          | 0.7034   | 0.8207 |

### Framework versions

- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
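MRPC is a sentence-pair (paraphrase) task, which the empty usage section does not show; a minimal sketch, assuming the standard `text-classification` pipeline's `{"text", "text_pair"}` input format (the example sentences are invented):

```python
from transformers import pipeline

# Hedged sketch: paraphrase detection over an invented sentence pair.
classifier = pipeline(
    "text-classification",
    model="hwaQing/distilbert-base-uncased-finetuned-mrpc-test",
)
print(classifier({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Quarterly profits hit an all-time high for the company.",
}))
```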
modelId: huggingtweets/minimalaq
author: huggingtweets
last_modified: 2021-09-10T20:50:07Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
thumbnail: https://www.huggingtweets.com/minimalaq/1631307002582/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1403107651291058185/3CBTwj6__400x400.jpg&#39;)"></div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">MINIMAL</div>
  <div style="text-align: center; font-size: 14px;">@minimalaq</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from MINIMAL.

| Data | MINIMAL |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 288 |
| Short tweets | 533 |
| Tweets kept | 2417 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v6z30t80/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @minimalaq's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5kd2ws7g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5kd2ws7g/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/minimalaq')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
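Generation is stochastic, so repeated calls return different continuations; for reproducible samples the seed can be fixed first with Transformers' `set_seed` (a minimal sketch):

```python
from transformers import pipeline, set_seed

set_seed(42)  # makes the sampled continuations repeatable across runs
generator = pipeline('text-generation', model='huggingtweets/minimalaq')
print(generator("My dream is", num_return_sequences=2))
```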
modelId: huggingartists/aaron-watson
author: huggingartists
last_modified: 2021-09-10T15:49:57Z
downloads: 6
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/aaron-watson", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/aaron-watson
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/894021d09a748eef8c6d63ad898b814b.650x430x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Aaron Watson</div>
  <a href="https://genius.com/artists/aaron-watson"><div style="text-align: center; font-size: 14px;">@aaron-watson</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Aaron Watson. The dataset is available [here](https://huggingface.co/datasets/huggingartists/aaron-watson) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/aaron-watson")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/14ha1tnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Aaron Watson's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/34e4zb2v) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/34e4zb2v/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/aaron-watson')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/aaron-watson")
model = AutoModelWithLMHead.from_pretrained("huggingartists/aaron-watson")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
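`AutoModelWithLMHead`, used in the card above, is deprecated in recent Transformers releases; for a GPT-2-based checkpoint the causal-LM auto class is the drop-in replacement (a sketch):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# AutoModelWithLMHead is deprecated; AutoModelForCausalLM loads the same
# GPT-2-based checkpoint.
tokenizer = AutoTokenizer.from_pretrained("huggingartists/aaron-watson")
model = AutoModelForCausalLM.from_pretrained("huggingartists/aaron-watson")
```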
modelId: huggingartists/lil-peep
author: huggingartists
last_modified: 2021-09-10T14:54:32Z
downloads: 4
likes: 1
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/lil-peep", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/lil-peep
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/919c7ba130d3861740cbe7fbd7f83c59.1000x1000x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Lil Peep</div>
  <a href="https://genius.com/artists/lil-peep"><div style="text-align: center; font-size: 14px;">@lil-peep</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Lil Peep. The dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-peep) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/lil-peep")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/39q6kspr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Peep's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/g0nxk974) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/g0nxk974/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/lil-peep')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-peep")
model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-peep")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
modelId: huggingartists/burzum
author: huggingartists
last_modified: 2021-09-10T13:30:58Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/burzum", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/burzum
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/62edc981d303447265d23a3862abce43.589x589x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Burzum</div>
  <a href="https://genius.com/artists/burzum"><div style="text-align: center; font-size: 14px;">@burzum</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Burzum. The dataset is available [here](https://huggingface.co/datasets/huggingartists/burzum) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/burzum")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/j34qgww2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Burzum's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3579mrib) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3579mrib/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/burzum')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/burzum")
model = AutoModelWithLMHead.from_pretrained("huggingartists/burzum")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
modelId: huggingartists/scriptonite
author: huggingartists
last_modified: 2021-09-10T13:10:06Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/scriptonite", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/scriptonite
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/411d50392aef867fe0e9dd55a074ecfb.1000x1000x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Π‘ΠΊΡ€ΠΈΠΏΡ‚ΠΎΠ½ΠΈΡ‚ (Scriptonite)</div>
  <a href="https://genius.com/artists/scriptonite"><div style="text-align: center; font-size: 14px;">@scriptonite</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Π‘ΠΊΡ€ΠΈΠΏΡ‚ΠΎΠ½ΠΈΡ‚ (Scriptonite). The dataset is available [here](https://huggingface.co/datasets/huggingartists/scriptonite) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/scriptonite")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/13pxeww0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Π‘ΠΊΡ€ΠΈΠΏΡ‚ΠΎΠ½ΠΈΡ‚ (Scriptonite)'s lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1itfp830) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1itfp830/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/scriptonite')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/scriptonite")
model = AutoModelWithLMHead.from_pretrained("huggingartists/scriptonite")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
modelId: huggingartists/25-17
author: huggingartists
last_modified: 2021-09-10T12:55:59Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/25-17", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/25-17
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4fedc5dd2830a874a5274bf1cac62002.1000x1000x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">25/17</div>
  <a href="https://genius.com/artists/25-17"><div style="text-align: center; font-size: 14px;">@25-17</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from 25/17. The dataset is available [here](https://huggingface.co/datasets/huggingartists/25-17) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/25-17")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1iuytbjp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 25/17's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/25-17')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/25-17")
model = AutoModelWithLMHead.from_pretrained("huggingartists/25-17")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
modelId: huggingtweets/freakytheory-insprepositive-masterythink
author: huggingtweets
last_modified: 2021-09-10T12:25:07Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
thumbnail: https://www.huggingtweets.com/freakytheory-insprepositive-masterythink/1631276702724/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1155938695662505984/H3RmD4Fq_400x400.jpg&#39;)"></div>
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/861903051669610496/dvuuio0A_400x400.jpg&#39;)"></div>
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1362638938549018626/O2jBlckS_400x400.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Inspiring Quotes - Be Positive & Motivation & Motivation & Success</div>
  <div style="text-align: center; font-size: 14px;">@freakytheory-insprepositive-masterythink</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Inspiring Quotes - Be Positive & Motivation & Motivation & Success.

| Data | Inspiring Quotes - Be Positive | Motivation | Motivation & Success |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3233 | 706 |
| Retweets | 789 | 13 | 4 |
| Short tweets | 2 | 10 | 14 |
| Tweets kept | 2459 | 3210 | 688 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3aupxbxm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freakytheory-insprepositive-masterythink's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p03go3pp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p03go3pp/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/freakytheory-insprepositive-masterythink')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
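The per-account columns in the training-data table satisfy kept = downloaded - retweets - short; a quick check of that relationship:

```python
# (downloaded, retweets, short, kept) per account, copied from the table above.
rows = {
    "Inspiring Quotes - Be Positive": (3250, 789, 2, 2459),
    "Motivation": (3233, 13, 10, 3210),
    "Motivation & Success": (706, 4, 14, 688),
}
for name, (downloaded, retweets, short, kept) in rows.items():
    assert downloaded - retweets - short == kept, name
print("kept = downloaded - retweets - short holds for all three accounts")
```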
modelId: huggingartists/john-lennon
author: huggingartists
last_modified: 2021-09-10T10:37:44Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/john-lennon", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/john-lennon
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/de14b272004b51dea8071e7cba21cbac.1000x1000x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">John Lennon</div>
  <a href="https://genius.com/artists/john-lennon"><div style="text-align: center; font-size: 14px;">@john-lennon</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from John Lennon. The dataset is available [here](https://huggingface.co/datasets/huggingartists/john-lennon) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/john-lennon")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/f3d8fseh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on John Lennon's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/36mtogkg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/36mtogkg/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/john-lennon')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/john-lennon")
model = AutoModelWithLMHead.from_pretrained("huggingartists/john-lennon")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
modelId: Riser/YOLOP
author: Riser
last_modified: 2021-09-10T09:08:34Z
downloads: 0
likes: 9
library_name: null
tags: [ "object-detection", "arxiv:2108.11250", "arxiv:1612.07695", "arxiv:1606.02147", "region:us" ]
pipeline_tag: object-detection
createdAt: 2022-03-02T23:29:04Z
card:
---
tags:
- object-detection
---

## You Only Look Once for Panoptic Driving Perception

> [**YOLOP: You Only Look Once for Panoptic Driving Perception**](https://arxiv.org/abs/2108.11250)
>
> by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/), [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm)
>
> *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))*

---

### The Illustration of YOLOP

![yolop](pictures/yolop.png)

### Contributions

* We put forward an efficient multi-task network that jointly handles three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection. It saves computational cost and reduces inference time while improving the performance of each task. Our work is the first to reach real time on embedded devices while maintaining state-of-the-art performance on the `BDD100K` dataset.
* We design ablative experiments to verify the effectiveness of our multi-tasking scheme. They show that the three tasks can be learned jointly without tedious alternating optimization.

### Results

#### Traffic Object Detection Result

| Model          | Recall(%) | mAP50(%) | Speed(fps) |
| -------------- | --------- | -------- | ---------- |
| `Multinet`     | 81.3      | 60.2     | 8.6        |
| `DLT-Net`      | 89.4      | 68.4     | 9.3        |
| `Faster R-CNN` | 77.2      | 55.6     | 5.3        |
| `YOLOv5s`      | 86.8      | 77.2     | 82         |
| `YOLOP(ours)`  | 89.2      | 76.5     | 41         |

#### Drivable Area Segmentation Result

| Model         | mIOU(%) | Speed(fps) |
| ------------- | ------- | ---------- |
| `Multinet`    | 71.6    | 8.6        |
| `DLT-Net`     | 71.3    | 9.3        |
| `PSPNet`      | 89.6    | 11.1       |
| `YOLOP(ours)` | 91.5    | 41         |

#### Lane Detection Result

| Model         | mIOU(%) | IOU(%) |
| ------------- | ------- | ------ |
| `ENet`        | 34.12   | 14.64  |
| `SCNN`        | 35.79   | 15.84  |
| `ENet-SAD`    | 36.56   | 16.02  |
| `YOLOP(ours)` | 70.50   | 26.20  |

#### Ablation Study 1: End-to-end v.s. Step-by-step

| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) |
| --------------- | --------- | ----- | ------- | ----------- | ------ |
| `ES-W`          | 87.0      | 75.3  | 90.4    | 66.8        | 26.2   |
| `ED-W`          | 87.3      | 76.0  | 91.6    | 71.2        | 26.1   |
| `ES-D-W`        | 87.0      | 75.1  | 91.7    | 68.6        | 27.0   |
| `ED-S-W`        | 87.5      | 76.1  | 91.6    | 68.0        | 26.8   |
| `End-to-end`    | 89.2      | 76.5  | 91.5    | 70.5        | 26.2   |

#### Ablation Study 2: Multi-task v.s. Single task

| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) |
| --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- |
| `Det(only)`     | 88.2      | 76.9  | -       | -           | -      | 15.7            |
| `Da-Seg(only)`  | -         | -     | 92.0    | -           | -      | 14.8            |
| `Ll-Seg(only)`  | -         | -     | -       | 79.6        | 27.9   | 14.8            |
| `Multitask`     | 89.2      | 76.5  | 91.5    | 70.5        | 26.2   | 24.4            |

**Notes**:

- The works we have used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful work.
- In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first train only the Encoder and Detect head; then freeze both and train the two Segment heads; finally train the entire network jointly on all three tasks) can be marked as ED-S-W, and the same for the others.

---

### Visualization

#### Traffic Object Detection Result

![detect result](pictures/detect.png)

#### Drivable Area Segmentation Result

![](pictures/da.png)

#### Lane Detection Result

![](pictures/ll.png)

**Notes**:

- The visualization of the lane detection result has been post-processed by quadratic fitting.

---

### Project Structure

```python
β”œβ”€inference
β”‚ β”œβ”€images          # inference images
β”‚ β”œβ”€output          # inference results
β”œβ”€lib
β”‚ β”œβ”€config/default  # configuration of training and validation
β”‚ β”œβ”€core
β”‚ β”‚ β”œβ”€activations.py   # activation functions
β”‚ β”‚ β”œβ”€evaluate.py      # calculation of metrics
β”‚ β”‚ β”œβ”€function.py      # training and validation of the model
β”‚ β”‚ β”œβ”€general.py       # calculation of metrics, NMS, conversion of data format, visualization
β”‚ β”‚ β”œβ”€loss.py          # loss function
β”‚ β”‚ β”œβ”€postprocess.py   # postprocessing (refine da-seg and ll-seg, unrelated to paper)
β”‚ β”œβ”€dataset
β”‚ β”‚ β”œβ”€AutoDriveDataset.py  # superclass dataset, general functions
β”‚ β”‚ β”œβ”€bdd.py               # subclass dataset, specific functions
β”‚ β”‚ β”œβ”€hust.py              # subclass dataset (campus scene, unrelated to paper)
β”‚ β”‚ β”œβ”€convect.py
β”‚ β”‚ β”œβ”€DemoDataset.py       # demo dataset (image, video and stream)
β”‚ β”œβ”€models
β”‚ β”‚ β”œβ”€YOLOP.py   # setup and configuration of the model
β”‚ β”‚ β”œβ”€light.py   # model lightweighting (unrelated to paper, zwt)
β”‚ β”‚ β”œβ”€common.py  # calculation modules
β”‚ β”œβ”€utils
β”‚ β”‚ β”œβ”€augmentations.py  # data augmentation
β”‚ β”‚ β”œβ”€autoanchor.py     # auto anchor (k-means)
β”‚ β”‚ β”œβ”€split_dataset.py  # (campus scene, unrelated to paper)
β”‚ β”‚ β”œβ”€utils.py          # logging, device selection, time measurement, optimizer selection, model save & initialize, distributed training
β”‚ β”œβ”€run
β”‚ β”‚ β”œβ”€dataset/training time  # visualization, logging and model saving
β”œβ”€tools
β”‚ β”œβ”€demo.py   # demo (folder, camera)
β”‚ β”œβ”€test.py
β”‚ β”œβ”€train.py
β”œβ”€toolkits
β”‚ β”œβ”€deploy   # deployment of the model
β”œβ”€weights    # pretrained models
```

---

### Requirements

This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+:

```
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```

See `requirements.txt` for additional dependencies and version requirements:

```setup
pip install -r requirements.txt
```

### Data preparation

#### Download

- Download the images from [images](https://bdd-data.berkeley.edu/).
- Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing).
- Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing).
- Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing).

We recommend the following dataset directory structure:

```
# The id represents the correspondence relation
β”œβ”€dataset root
β”‚ β”œβ”€images
β”‚ β”‚ β”œβ”€train
β”‚ β”‚ β”œβ”€val
β”‚ β”œβ”€det_annotations
β”‚ β”‚ β”œβ”€train
β”‚ β”‚ β”œβ”€val
β”‚ β”œβ”€da_seg_annotations
β”‚ β”‚ β”œβ”€train
β”‚ β”‚ β”œβ”€val
β”‚ β”œβ”€ll_seg_annotations
β”‚ β”‚ β”œβ”€train
β”‚ β”‚ β”œβ”€val
```

Update your dataset path in `./lib/config/default.py`.

### Training

You can set the training configuration in `./lib/config/default.py`, including: loading of the preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs and batch_size.

If you want to try alternating optimization or train the model for a single task, set the corresponding configuration in `./lib/config/default.py` to `True`. (In the following, all configurations are `False`, which means training multiple tasks end to end.)

```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False          # Only train the two segmentation branches
_C.TRAIN.DET_ONLY = False          # Only train the detection branch
_C.TRAIN.ENC_SEG_ONLY = False      # Only train the encoder and the two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False      # Only train the encoder and the detection branch

# Single task
_C.TRAIN.DRIVABLE_ONLY = False     # Only train the da_segmentation task
_C.TRAIN.LANE_ONLY = False         # Only train the ll_segmentation task
_C.TRAIN.DET_ONLY = False          # Only train the detection task
```

Start training:

```shell
python tools/train.py
```

### Evaluation

You can set the evaluation configuration in `./lib/config/default.py`, including: batch_size and the threshold value for NMS.

Start evaluating:

```shell
python tools/test.py --weights weights/End-to-end.pth
```

### Demo Test

We provide two testing methods.

#### Folder

Store images or videos under the path given by `--source`; the inference results are saved to `--save-dir`:

```shell
python tools/demo.py --source inference/images
```

#### Camera

If a camera is connected to your computer, set `--source` to the camera number (the default is 0):

```shell
python tools/demo.py --source 0
```

### Deployment

Our model can run inference in real time on a `Jetson TX2`, with a `ZED Camera` capturing images. We use `TensorRT` to speed up inference. Code for model deployment and inference is provided in `./toolkits/deploy`.

## Citation

If you find our paper and code useful for your research, please consider giving a star and a citation:

```BibTeX
@misc{2108.11250,
  Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
  Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
  Year = {2021},
  Eprint = {arXiv:2108.11250},
}
```
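For quick experimentation without cloning the repository, the upstream GitHub project also advertises a `torch.hub` entry point; a hedged sketch of that interface (this is an assumption about the GitHub repo, not something documented in this card):

```python
import torch

# Hedged sketch: assumes the hustvl/yolop GitHub repo exposes a 'yolop'
# hubconf entry with pretrained weights.
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
img = torch.randn(1, 3, 640, 640)  # dummy input at the demo resolution
det_out, da_seg_out, ll_seg_out = model(img)  # detection, drivable-area and lane-line outputs
```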
modelId: huggingartists/the-velvet-underground
author: huggingartists
last_modified: 2021-09-10T09:04:08Z
downloads: 6
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/the-velvet-underground", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/the-velvet-underground
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://s3.amazonaws.com/rapgenius/vu.jpeg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">The Velvet Underground</div>
  <a href="https://genius.com/artists/the-velvet-underground"><div style="text-align: center; font-size: 14px;">@the-velvet-underground</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from The Velvet Underground. The dataset is available [here](https://huggingface.co/datasets/huggingartists/the-velvet-underground) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/the-velvet-underground")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/lbkqy84q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Velvet Underground's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1e4s74q4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1e4s74q4/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/the-velvet-underground')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-velvet-underground")
model = AutoModelWithLMHead.from_pretrained("huggingartists/the-velvet-underground")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
modelId: huggingartists/kipelov
author: huggingartists
last_modified: 2021-09-10T08:40:56Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/kipelov", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2022-03-02T23:29:05Z
card:
---
language: en
datasets:
- huggingartists/kipelov
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/d4ae6ad73ca63bc97b2a10dfefc47b63.1000x1000x1.jpg&#39;)"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">КипСлов (Kipelov)</div>
  <a href="https://genius.com/artists/kipelov"><div style="text-align: center; font-size: 14px;">@kipelov</div></a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from КипСлов (Kipelov). The dataset is available [here](https://huggingface.co/datasets/huggingartists/kipelov) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/kipelov")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/225m5y65/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on КипСлов (Kipelov)'s lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/38es269x) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/38es269x/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/kipelov')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/kipelov")
model = AutoModelWithLMHead.from_pretrained("huggingartists/kipelov")
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/grigory-leps
huggingartists
2021-09-10T08:13:40Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/grigory-leps", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/grigory-leps tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/f30e8944a06a196868ee4b077a7926a6.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Π“Ρ€ΠΈΠ³ΠΎΡ€ΠΈΠΉ ЛСпс (Grigory Leps)</div> <a href="https://genius.com/artists/grigory-leps"> <div style="text-align: center; font-size: 14px;">@grigory-leps</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Π“Ρ€ΠΈΠ³ΠΎΡ€ΠΈΠΉ ЛСпс (Grigory Leps). Dataset is available [here](https://huggingface.co/datasets/huggingartists/grigory-leps). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/grigory-leps") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/32wqexib/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Π“Ρ€ΠΈΠ³ΠΎΡ€ΠΈΠΉ ЛСпс (Grigory Leps)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1j0f6nwb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1j0f6nwb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/grigory-leps') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/grigory-leps") model = AutoModelWithLMHead.from_pretrained("huggingartists/grigory-leps") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/mf-doom
huggingartists
2021-09-10T07:07:44Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/mf-doom", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/mf-doom tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/263743633b6e58854e753b25dca6beab.430x430x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MF DOOM</div> <a href="https://genius.com/artists/mf-doom"> <div style="text-align: center; font-size: 14px;">@mf-doom</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from MF DOOM. Dataset is available [here](https://huggingface.co/datasets/huggingartists/mf-doom). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/mf-doom") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3lhrsfds/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on MF DOOM's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/mf-doom') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/mf-doom") model = AutoModelWithLMHead.from_pretrained("huggingartists/mf-doom") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/yung-plague
huggingartists
2021-09-10T06:49:38Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/yung-plague", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/yung-plague tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/6c0f8e02f467c694379f242ea2897efd.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Yung Plague</div> <a href="https://genius.com/artists/yung-plague"> <div style="text-align: center; font-size: 14px;">@yung-plague</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Yung Plague. Dataset is available [here](https://huggingface.co/datasets/huggingartists/yung-plague). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/yung-plague") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/9hz73kye/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Yung Plague's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/28boe4q8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/28boe4q8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/yung-plague') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/yung-plague") model = AutoModelWithLMHead.from_pretrained("huggingartists/yung-plague") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
juanfiguera/ice_cream
juanfiguera
2021-09-10T02:59:58Z
66
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: ice_cream results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5166666507720947 --- # ice_cream Autogenerated by HuggingPicsπŸ€—πŸ–ΌοΈ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### chocolate ice cream ![chocolate ice cream](images/chocolate_ice_cream.jpg) #### vanilla ice cream ![vanilla ice cream](images/vanilla_ice_cream.jpg)
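A minimal inference sketch for this classifier, assuming the repository ships the usual HuggingPics ViT weights and preprocessor config; the image path is a placeholder:

```python
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

# Load the fine-tuned ViT classifier and its preprocessor from the Hub.
extractor = ViTFeatureExtractor.from_pretrained("juanfiguera/ice_cream")
model = ViTForImageClassification.from_pretrained("juanfiguera/ice_cream")

# Replace the path with your own photo; this filename is only an illustration.
image = Image.open("my_ice_cream_photo.jpg")
inputs = extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # e.g. "chocolate ice cream" or "vanilla ice cream"
```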
flooptherocket/DialogGPT-small-rick
flooptherocket
2021-09-10T01:17:41Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational --- A GPT-2 conversational model for @Rick from Rick and Morty.
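A minimal single-turn chat sketch for this model, following the common DialoGPT recipe; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flooptherocket/DialogGPT-small-rick")
model = AutoModelForCausalLM.from_pretrained("flooptherocket/DialogGPT-small-rick")

# Encode one user turn, terminated by the end-of-sequence token as DialoGPT expects.
prompt = "Morty, where are we going?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate a reply and decode only the newly produced tokens.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```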
bshlgrs/autonlp-old-data-trained-10022181
bshlgrs
2021-09-09T21:46:53Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bshlgrs/autonlp-data-old-data-trained", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP πŸ€—" datasets: - bshlgrs/autonlp-data-old-data-trained --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 10022181 ## Validation Metrics - Loss: 0.369505375623703 - Accuracy: 0.8706206896551724 - Macro F1: 0.5410226656476808 - Micro F1: 0.8706206896551724 - Weighted F1: 0.8515634683886795 - Macro Precision: 0.5159711665622992 - Micro Precision: 0.8706206896551724 - Weighted Precision: 0.8346991124101657 - Macro Recall: 0.5711653346601209 - Micro Recall: 0.8706206896551724 - Weighted Recall: 0.8706206896551724 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-old-data-trained-10022181 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-old-data-trained-10022181", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-old-data-trained-10022181", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
dattam/DialoGPT-medium-TonyStarkBot
dattam
2021-09-09T18:05:36Z
4
3
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational --- # Tony Stark DialoGPT model Invite me to your discord server : https://discord.com/api/oauth2/authorize?client_id=885065886787063848&permissions=137439365184&scope=bot
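Beyond the Discord bot, the model can be tried locally. Below is a short interactive chat-loop sketch following the usual DialoGPT pattern; the number of turns and decoding settings are arbitrary choices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")
model = AutoModelForCausalLM.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")

chat_history_ids = None
for step in range(3):  # three turns, purely for illustration
    user_input = input(">> You: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    # Append the new turn to the running conversation history.
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("TonyStarkBot:", reply)
```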
sevbqewre/hyou
sevbqewre
2021-09-09T17:57:48Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-full-movie-watch-online-download-hdrip https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-online-full-free-download https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-full-movie-download-hd https://maccaboard.paulmccartney.com/users/download-shang-chi-2021-online-full-movie-free https://maccaboard.paulmccartney.com/users/123movies-online-shang-chi-2021-watch-free-hd https://maccaboard.paulmccartney.com/users/shang-chi-hindi-dubbed-movie-download-full-hd-720p-telegram https://maccaboard.paulmccartney.com/users/watch-shang-chi-hindi-dubbed-movie-download-full-hd-720p-0 https://maccaboard.paulmccartney.com/users/hindi-dubbed-watch-shang-chi-2021-full-hd-movie-online-free https://maccaboard.paulmccartney.com/users/download-shang-chi-2021-torrent-movie-free-hd-yts https://maccaboard.paulmccartney.com/users/download-720p-shang-chi-2021-full-movie-watch-free https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-full-movie-hd-online-free-download https://maccaboard.paulmccartney.com/users/download-shang-chi-full-movie-and-watch-online-free-hd-720p https://maccaboard.paulmccartney.com/users/123movies-free-shang-chi-2021-online-download-full-hd
Aleksandar/distilbert-srb-ner
Aleksandar
2021-09-09T06:27:16Z
14
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "sr", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy language: - sr model_index: - name: distilbert-srb-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: sr metric: name: Accuracy type: accuracy value: 0.9576561462374611 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2972 - Precision: 0.8871 - Recall: 0.9100 - F1: 0.8984 - Accuracy: 0.9577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 | | 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 | | 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 | | 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 | | 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 | | 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 | | 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 | | 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 | | 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 | | 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 | | 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 | | 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 | | 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 | | 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 | | 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 | | 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 | | 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 | | 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 | | 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 | | 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
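For inference, a hedged sketch using the token-classification pipeline; the Serbian example sentence is made up for illustration, and `aggregation_strategy` assumes a reasonably recent transformers release:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Aleksandar/distilbert-srb-ner")
model = AutoModelForTokenClassification.from_pretrained("Aleksandar/distilbert-srb-ner")

# Group word-piece predictions into whole entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Новак Ђоковић је рођен у Београду."))  # illustrative sentence
```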
rizky22/IndoBERT
rizky22
2021-09-09T05:33:05Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://sites.google.com/view/watchonline-full-hd-we-need-to/ https://sites.google.com/view/watch-hdthegateway2021fullmovi/ https://sites.google.com/view/downloadwatch-hdwildindian2021/ https://sites.google.com/view/putlocker123movieswatchkaren20/ https://sites.google.com/view/full-hdzone4142021moviewatchon/ https://sites.google.com/view/watch-hdmalignant2021onlinemov/ https://sites.google.com/view/watch-the-card-counter-2021-fu/ https://sites.google.com/view/queenpins2021onlinemoviefullhd/ https://sites.google.com/view/watch-hdsmallenginerepair2021f/ https://sites.google.com/view/shang-chi-watch/ https://sites.google.com/view/watch-vivo2021-online-free/ https://sites.google.com/view/watch-free-guy-download/ https://sites.google.com/view/hd-yakuza-princess-20/ https://www.metooo.io/e/watch-free-blue-bayou-2021-hd-movies-full-online-4k-uhd https://www.metooo.io/e/123movies-hd-watch-the-card-counter-online-movie-2021-full-free-download0 https://www.peacefirst.org/user-profile/cry-macho-2021-movie-online-full-hd-1 https://ok.ru/group/63840774127847/topic/153545931483367 https://medium.com/@arbor.hooper/123movies-watch-the-card-counter-2021-movie-online-full-free-download-1382366cc20a http://perencanaan.setjen.pertanian.go.id/index.php/forum/baca/123movies-watch-we-need-to-do-something-2021-movie-online-full-free-download-in-hd
eugenesiow/han
eugenesiow
2021-09-09T01:59:04Z
150
0
transformers
[ "transformers", "HAN", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:2008.08767", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Holistic Attention Network (HAN) HAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Single Image Super-Resolution via a Holistic Attention Network](https://arxiv.org/abs/2008.08767) by Niu et al. (2020) and first released in [this repository](https://github.com/wwlCape/HAN). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/han_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super- resolution approaches. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. 
### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import HanModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = HanModel.from_pretrained('eugenesiow/han', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64Γ—64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. 
The training code is provided below: ```python from super_image import Trainer, TrainingArguments, HanModel, HanConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = HanConfig( scale=4, # train a model to upscale 4x ) model = HanModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |han | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**** | |Set5 |3x |30.39/0.8678 |**** | |Set5 |4x |28.42/0.8101 |**31.21/0.8778** | |Set14 |2x |30.22/0.8683 |**** | |Set14 |3x |27.53/0.7737 |**** | |Set14 |4x |25.99/0.7023 |**28.18/0.7712** | |BSD100 |2x |29.55/0.8425 |**** | |BSD100 |3x |27.20/0.7382 |**** | |BSD100 |4x |25.96/0.6672 |**28.09/0.7533** | |Urban100 |2x |26.66/0.8408 |**** | |Urban100 |3x | |**** | |Urban100 |4x |23.14/0.6573 |**25.1/0.7497** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/han_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @misc{niu2020single, title={Single Image Super-Resolution via a Holistic Attention Network}, author={Ben Niu and Weilei Wen and Wenqi Ren and Xiangde Zhang and Lianping Yang and Shuzhen Wang and Kaihao Zhang and Xiaochun Cao and Haifeng Shen}, year={2020}, eprint={2008.08767}, archivePrefix={arXiv}, primaryClass={eess.IV} } ```
elisno/is_core_web_trf
elisno
2021-09-08T21:19:54Z
4
0
spacy
[ "spacy", "token-classification", "is", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - is model-index: - name: is_core_web_trf results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9193318395 - name: NER Recall type: recall value: 0.9217728758 - name: NER F Score type: f_score value: 0.9205507394 --- | Feature | Description | | --- | --- | | **Name** | `is_core_web_trf` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.1,<3.2.0` | | **Default Pipeline** | `transformer`, `ner`, `tagger`, `parser` | | **Components** | `transformer`, `ner`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (591 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` | | **`tagger`** | `aa`, `aae`, `aam`, `af`, `afe`, `afm`, `au`, `c`, `cn`, `ct`, `e`, `fahee`, `fahen`, `faheo`, `faheΓΎ`, `fahfe`, `fahfn`, `fahfo`, `fahfΓΎ`, `fakee`, `faken`, `fakeo`, `fakeΓΎ`, `fakfe`, `fakfn`, `fakfo`, `fakfΓΎ`, `favee`, `faven`, `faveo`, `faveΓΎ`, `favfe`, `favfn`, `favfo`, `favfΓΎ`, `fbhee`, `fbhen`, `fbheo`, `fbheΓΎ`, `fbhfe`, `fbhfn`, `fbhfo`, `fbhfΓΎ`, `fbkee`, `fbken`, `fbkeo`, `fbkeΓΎ`, `fbkfe`, `fbkfn`, `fbkfo`, `fbkfΓΎ`, `fbvee`, `fbven`, `fbveo`, `fbveΓΎ`, `fbvfe`, `fbvfn`, `fbvfo`, `fbvfΓΎ`, `fehee`, `fehen`, `feheo`, `feheΓΎ`, `fehfe`, `fehfn`, `fehfo`, `fehfΓΎ`, `fekee`, `feken`, `fekeo`, `fekeΓΎ`, `fekfe`, `fekfn`, `fekfo`, `fekfΓΎ`, `fevee`, `feven`, `feveo`, `feveΓΎ`, `fevfe`, `fevfn`, `fevfo`, `fevfΓΎ`, `fohee`, `fohen`, `foheo`, `foheΓΎ`, `fohfe`, `fohfn`, `fohfo`, `fohfΓΎ`, `fokee`, `foken`, `fokeo`, `fokeΓΎ`, `fokfe`, `fokfn`, `fokfo`, `fokfΓΎ`, `fovee`, `foven`, `foveo`, `foveΓΎ`, `fovfe`, `fovfn`, `fovfo`, `fovfΓΎ`, `fp1ee`, `fp1en`, `fp1eo`, `fp1eΓΎ`, `fp1fe`, `fp1fn`, `fp1fo`, `fp1fΓΎ`, `fp2ee`, `fp2en`, `fp2eo`, `fp2eΓΎ`, `fp2fe`, `fp2fn`, `fp2fo`, `fp2fΓΎ`, `fphee`, `fphen`, `fpheo`, `fpheΓΎ`, `fphfe`, `fphfn`, `fphfo`, `fphfΓΎ`, `fpkee`, `fpken`, `fpkeo`, `fpkeΓΎ`, `fpkfe`, `fpkfn`, `fpkfo`, `fpkfΓΎ`, `fpvee`, `fpven`, `fpveo`, `fpveΓΎ`, `fpvfe`, `fpvfn`, `fpvfo`, `fpvfΓΎ`, `fshee`, `fshen`, `fsheo`, `fsheΓΎ`, `fshfe`, `fshfn`, `fshfo`, `fshfΓΎ`, `fskee`, `fsken`, `fskeo`, `fskeΓΎ`, `fskfe`, `fskfn`, `fskfo`, `fskfΓΎ`, `fsvee`, `fsven`, `fsveo`, `fsveΓΎ`, `fsvfe`, `fsvfn`, `fsvfo`, `fsvfΓΎ`, `ghee`, `ghen`, `gheo`, `gheΓΎ`, `ghfe`, `ghfn`, `ghfo`, `ghfΓΎ`, `gkee`, `gken`, `gkeo`, `gkeΓΎ`, `gkfe`, `gkfn`, `gkfo`, `gkfΓΎ`, `gvee`, `gven`, `gveo`, `gveΓΎ`, `gvfe`, `gvfn`, `gvfo`, `gvfΓΎ`, `ks`, `kt`, `lheeof`, `lheesf`, `lheeve`, `lheevf`, `lheevm`, `lhenof`, `lhense`, `lhensf`, `lhenve`, `lhenvf`, `lhenvm`, `lheoof`, `lheose`, `lheosf`, `lheosm`, `lheove`, `lheovf`, `lheovm`, `lheΓΎof`, `lheΓΎse`, `lheΓΎsf`, `lheΓΎve`, `lheΓΎvf`, `lheΓΎvm`, `lhfeof`, `lhfese`, `lhfesf`, `lhfeve`, `lhfevf`, `lhfevm`, `lhfnof`, `lhfnse`, `lhfnsf`, `lhfnve`, `lhfnvf`, `lhfnvm`, `lhfoof`, `lhfose`, `lhfosf`, `lhfove`, `lhfovf`, `lhfovm`, `lhfΓΎof`, `lhfΓΎse`, `lhfΓΎsf`, `lhfΓΎve`, `lhfΓΎvf`, `lhfΓΎvm`, `lkeeof`, `lkeesf`, `lkeeve`, `lkeevf`, `lkeevm`, `lkenof`, `lkense`, `lkensf`, `lkenve`, `lkenvf`, `lkenvm`, `lkeoof`, `lkeose`, `lkeosf`, `lkeove`, `lkeovf`, `lkeovm`, `lkeΓΎof`, `lkeΓΎse`, `lkeΓΎsf`, `lkeΓΎve`, `lkeΓΎvf`, `lkeΓΎvm`, `lkfeof`, `lkfese`, `lkfesf`, `lkfeve`, `lkfevf`, `lkfevm`, `lkfnof`, `lkfnse`, `lkfnsf`, `lkfnve`, 
`lkfnvf`, `lkfnvm`, `lkfoof`, `lkfose`, `lkfosf`, `lkfove`, `lkfovf`, `lkfovm`, `lkfΓΎof`, `lkfΓΎse`, `lkfΓΎsf`, `lkfΓΎsm`, `lkfΓΎve`, `lkfΓΎvf`, `lkfΓΎvm`, `lveeof`, `lveese`, `lveesf`, `lveeve`, `lveevf`, `lveevm`, `lvenof`, `lvense`, `lvensf`, `lvenve`, `lvenvf`, `lvenvm`, `lveoof`, `lveose`, `lveosf`, `lveove`, `lveovf`, `lveovm`, `lveΓΎof`, `lveΓΎse`, `lveΓΎsf`, `lveΓΎve`, `lveΓΎvf`, `lveΓΎvm`, `lvfeof`, `lvfese`, `lvfesf`, `lvfeve`, `lvfevf`, `lvfevm`, `lvfnof`, `lvfnse`, `lvfnsf`, `lvfnve`, `lvfnvf`, `lvfnvm`, `lvfoof`, `lvfose`, `lvfosf`, `lvfove`, `lvfovf`, `lvfovm`, `lvfΓΎof`, `lvfΓΎse`, `lvfΓΎsf`, `lvfΓΎsm`, `lvfΓΎve`, `lvfΓΎvf`, `lvfΓΎvm`, `m`, `n----s`, `n-ee`, `n-ee-s`, `n-en`, `n-en-s`, `n-eng`, `n-eo`, `n-eo-s`, `n-eΓΎ`, `n-eΓΎ-s`, `n-fn`, `nhee`, `nhee-s`, `nheeg`, `nheegs`, `nhen`, `nhen-s`, `nheng`, `nhengs`, `nheo`, `nheo-s`, `nheog`, `nheogs`, `nheΓΎ`, `nheΓΎ-s`, `nheΓΎg`, `nheΓΎgs`, `nhfe`, `nhfe-s`, `nhfeg`, `nhfegs`, `nhfn`, `nhfn-s`, `nhfng`, `nhfngs`, `nhfo`, `nhfo-s`, `nhfog`, `nhfogs`, `nhfΓΎ`, `nhfΓΎ-s`, `nhfΓΎg`, `nhfΓΎgs`, `nkee`, `nkee-s`, `nkeeg`, `nkeegs`, `nken`, `nken-s`, `nkeng`, `nkengs`, `nkeo`, `nkeo-s`, `nkeog`, `nkeogs`, `nkeΓΎ`, `nkeΓΎ-s`, `nkeΓΎg`, `nkeΓΎgs`, `nkfe`, `nkfe-s`, `nkfeg`, `nkfegs`, `nkfn`, `nkfn-s`, `nkfng`, `nkfngs`, `nkfo`, `nkfo-s`, `nkfog`, `nkfogs`, `nkfΓΎ`, `nkfΓΎ-s`, `nkfΓΎg`, `nkfΓΎgs`, `nvee`, `nvee-s`, `nveeg`, `nveegs`, `nven`, `nven-s`, `nveng`, `nvengs`, `nveo`, `nveo-s`, `nveog`, `nveogs`, `nveΓΎ`, `nveΓΎ-s`, `nveΓΎg`, `nveΓΎgs`, `nvfe`, `nvfe-s`, `nvfeg`, `nvfegs`, `nvfn`, `nvfn-s`, `nvfng`, `nvfngs`, `nvfo`, `nvfo-s`, `nvfog`, `nvfogs`, `nvfΓΎ`, `nvfΓΎ-s`, `nvfΓΎg`, `nvfΓΎgs`, `pa`, `pg`, `pk`, `pl`, `sbg2en`, `sbg2fn`, `sbm2en`, `sbm2fn`, `sfg1en`, `sfg1eΓΎ`, `sfg1fn`, `sfg1fΓΎ`, `sfg2en`, `sfg2eΓΎ`, `sfg2fn`, `sfg2fΓΎ`, `sfg3en`, `sfg3eΓΎ`, `sfg3fn`, `sfg3fΓΎ`, `sfm1en`, `sfm1eΓΎ`, `sfm1fn`, `sfm1fΓΎ`, `sfm2en`, `sfm2eΓΎ`, `sfm2fn`, `sfm2fΓΎ`, `sfm3en`, `sfm3eΓΎ`, `sfm3fn`, `sfm3fΓΎ`, `slg`, `sng`, `snm`, `svg1en`, `svg1eΓΎ`, `svg1fn`, `svg1fΓΎ`, `svg2en`, `svg2eΓΎ`, `svg2fn`, `svg2fΓΎ`, `svg3en`, `svg3eΓΎ`, `svg3fn`, `svg3fΓΎ`, `svm1en`, `svm1eΓΎ`, `svm1fn`, `svm1fΓΎ`, `svm2en`, `svm2eΓΎ`, `svm2fn`, `svm3en`, `svm3eΓΎ`, `svm3fn`, `svm3fΓΎ`, `sΓΎghen`, `sΓΎgheo`, `sΓΎghfn`, `sΓΎghfo`, `sΓΎgken`, `sΓΎgkeo`, `sΓΎgkfn`, `sΓΎgkfo`, `sΓΎgven`, `sΓΎgveo`, `sΓΎgvfn`, `sΓΎgvfo`, `sΓΎgvfΓΎ`, `sΓΎmhen`, `sΓΎmheo`, `sΓΎmken`, `sΓΎmven`, `ta`, `tfhee`, `tfhen`, `tfheo`, `tfheΓΎ`, `tfhfe`, `tfhfn`, `tfhfo`, `tfhfΓΎ`, `tfkee`, `tfken`, `tfkeo`, `tfkeΓΎ`, `tfkfe`, `tfkfn`, `tfkfo`, `tfkfΓΎ`, `tfvee`, `tfven`, `tfveo`, `tfveΓΎ`, `tfvfe`, `tfvfn`, `tfvfo`, `tfvfΓΎ`, `to`, `tp`, `v`, `x` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `fixed`, `flat:name`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:arg`, `parataxis`, `punct`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 92.06 | | `ENTS_P` | 91.93 | | `ENTS_R` | 92.18 | | `TRANSFORMER_LOSS` | 248325.98 | | `NER_LOSS` | 120059.07 |
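A possible way to install and run the pipeline, assuming the repository contains a packaged spaCy wheel with the standard naming scheme (check the repo's file list for the exact filename); the Icelandic sentence is illustrative:

```python
# Assumed install step (exact wheel filename may differ in the repository):
#   pip install https://huggingface.co/elisno/is_core_web_trf/resolve/main/is_core_web_trf-any-py3-none-any.whl
import spacy

# Load the installed package and run the transformer, NER, tagger and parser components.
nlp = spacy.load("is_core_web_trf")
doc = nlp("Halldór Laxness fékk Nóbelsverðlaunin í bókmenntum árið 1955.")

print([(ent.text, ent.label_) for ent in doc.ents])          # named entities
print([(tok.text, tok.tag_, tok.dep_) for tok in doc])       # tags and dependencies
```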
huggingtweets/brad_buchsbaum
huggingtweets
2021-09-08T19:43:10Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1393736501838721031/DCd35uGN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">bbuchsbaum</div> <div style="text-align: center; font-size: 14px;">@brad_buchsbaum</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from bbuchsbaum. | Data | bbuchsbaum | | --- | --- | | Tweets downloaded | 1346 | | Retweets | 125 | | Short tweets | 53 | | Tweets kept | 1168 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uivlvhob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @brad_buchsbaum's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34xkida2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34xkida2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/brad_buchsbaum') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LeoCordoba/mt5-small-cc-news-es-titles
LeoCordoba
2021-09-08T17:03:30Z
14
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "spanish", "es", "dataset:LeoCordoba/CC-NEWS-ES-titles", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- language: es tags: - summarization - mt5 - spanish license: apache-2.0 datasets: - LeoCordoba/CC-NEWS-ES-titles model-index: - name: mt5-small-ccnews-titles-es results: - task: name: Abstractive Text Summarization type: abstractive-text-summarization dataset: name: "CCNEWS-ES-titles" type: LeoCordoba/CC-NEWS-ES-titles metrics: - name: Validation ROGUE-1 type: rogue-1 value: 22.6623 - name: Validation ROGUE-2 type: rogue-2 value: 7.7894 - name: Validation ROGUE-L type: rogue-l value: 19.8015 - name: Validation ROGUE-Lsum type: rogue-lsum value: 19.8092 - name: Test ROGUE-1 type: rogue-1 value: 22.9263 - name: Test ROGUE-2 type: rogue-2 value: 7.9146 - name: Test ROGUE-L type: rogue-l value: 20.0272 - name: Test ROGUE-Lsum type: rogue-lsum value: 20.0387 widget: - text: "La chocotorta, el tradicional y prΓ‘ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por crΓ­ticos de restaurants internacionales, a casi 40 aΓ±os de su creaciΓ³n. El rΓ‘nking Taste Atlas ubicΓ³ primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. β€œEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspirΓ³ en el famoso tiramisΓΊ italiano. EstΓ‘ elaborado con tres ingredientes bΓ‘sicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la pΓ‘gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaciΓ³n, superΓ³ tambiΓ©n a los waffles belgas y el zserbΓ³ hΓΊngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaΓ±a al listΓ³n dorado de β€œpostre nΓΊmero unoβ€œ, los expertos enseΓ±an ademΓ‘s cΓ³mo se hacen las chocotortas, paso por paso. β€œLas galletas se ablandan en leche y se cubren con una combinaciΓ³n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, cafΓ© o incluso licor de café”, detallan. Por ΓΊltimo, adjudican su creaciΓ³n a una β€œcampaΓ±a de mΓ‘rketing” diseΓ±ada para promover las galletitas icΓ³nicas que le dan su nombre. La chocotorta, infaltable en los cumpleaΓ±os argentinos, fue creada en 1982 por una creativa de las agencias mΓ‘s importantes del paΓ­s, MaritΓ© MabragaΓ±a." --- ## Hyperparameters { "max_target_length": 64, "model_name_or_path": "google/mt5-small", "num_train_epochs": 3, "seed": 7, "summary_column": "output_text", "text_column": "text", "encoder_max_length" : 512, "decoder_max_length" :36, "batch_size" : 128 } ## Usage ``` article = """ La chocotorta, el tradicional y prΓ‘ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por crΓ­ticos de restaurants internacionales, a casi 40 aΓ±os de su creaciΓ³n. El rΓ‘nking Taste Atlas ubicΓ³ primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. β€œEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspirΓ³ en el famoso tiramisΓΊ italiano. EstΓ‘ elaborado con tres ingredientes bΓ‘sicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la pΓ‘gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaciΓ³n, superΓ³ tambiΓ©n a los waffles belgas y el zserbΓ³ hΓΊngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. 
En el texto que acompaΓ±a al listΓ³n dorado de β€œpostre nΓΊmero uno", los expertos enseΓ±an ademΓ‘s cΓ³mo se hacen las chocotortas, paso por paso. β€œLas galletas se ablandan en leche y se cubren con una combinaciΓ³n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, cafΓ© o incluso licor de café”, detallan. Por ΓΊltimo, adjudican su creaciΓ³n a una β€œcampaΓ±a de mΓ‘rketing” diseΓ±ada para promover las galletitas icΓ³nicas que le dan su nombre. La chocotorta, infaltable en los cumpleaΓ±os argentinos, fue creada en 1982 por una creativa de las agencias mΓ‘s importantes del paΓ­s, MaritΓ© MabragaΓ±a. """ from transformers import pipeline summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-ccnews-titles-es") summarizer(article, min_length=5, max_length=64) ``` ## Results | metric | score | | --- | ----- | | eval_loss | 2.879085063934326 | | eval_rouge1 | 22.6623 | | eval_rouge2 | 7.7894 | | eval_rougeL | 19.8015, | | eval_rougeLsum | 19.8092 | | eval_gen_len | 17.1839 | | test_loss | 2.878429412841797 | | test_rouge1 | 22.9263 | | test_rouge2 | 7.9146 | | test_rougeL | 20.0272 | | test_rougeLsum | 20.0387 | | test_gen_len | 17.1696 |
LeoCordoba/beto2beto
LeoCordoba
2021-09-08T16:31:21Z
23
0
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "text-generation", "spanish", "beto", "es", "dataset:LeoCordoba/CC-NEWS-ES", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: es tags: - text-generation - spanish - encoder-decoder - beto license: apache-2.0 datasets: - LeoCordoba/CC-NEWS-ES model-index: - name: beto2beto --- ## beto2beto Usage example here: https://colab.research.google.com/drive/18a2ZfF1e_Kyyydlv8INQIkJbv294xcAm?usp=sharing Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40; decoder max length: 128. ## Hyperparameters ## Usage ## Results | key | value | | --- | ----- | | test_loss | 2.65148806571960452 |
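Since the card's Usage section is empty, here is a hedged generation sketch; it assumes the checkpoint loads as a standard `EncoderDecoderModel` with its bundled tokenizer (consistent with the encoder-decoder tag), and the Spanish prompt is illustrative:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto")
model = EncoderDecoderModel.from_pretrained("LeoCordoba/beto2beto")

# Encode a short Spanish prompt (encoder max length 40, per the card) and generate
# up to the decoder max length of 128 tokens.
inputs = tokenizer("La inteligencia artificial", return_tensors="pt", max_length=40, truncation=True)
outputs = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```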
sv/gpt2-nft-poetry
sv
2021-09-08T16:15:47Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-nft-poetry results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-nft-poetry This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.0243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 282 | 4.3092 | | 4.5403 | 2.0 | 564 | 4.1283 | | 4.5403 | 3.0 | 846 | 4.0605 | | 4.039 | 4.0 | 1128 | 4.0321 | | 4.039 | 5.0 | 1410 | 4.0243 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
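The card documents training but not generation. A hedged sampling sketch with the text-generation pipeline follows; the prompt and decoding settings are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sv/gpt2-nft-poetry")

# Sample a few short continuations; nucleus sampling settings are arbitrary choices.
outputs = generator("Beneath a pixel moon", max_length=60, num_return_sequences=3,
                    do_sample=True, top_p=0.95)
for out in outputs:
    print(out["generated_text"])
```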
Jeffrey/DialoGPT-small-Jeffrey
Jeffrey
2021-09-08T15:53:25Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational ---
fihtrotuld/123
fihtrotuld
2021-09-08T01:35:59Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad"
headers = {"Authorization": "Bearer api_UXqrzQBiZKXaWxstVwEKcYvHQpGSGiQGbr"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "What's my name?",
        "context": "My name is Clara and I live in Berkeley.",
    },
})
```
nateraw/timm-resnet50-beans
nateraw
2021-09-07T17:21:50Z
14
1
timm
[ "timm", "pytorch", "image-classification", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - timm library_tag: timm --- # Model card for `timm-resnet50-beans` **TODO** **For now, try dragging and dropping this image into the inference widget. It should classify as angular_leaf_spot.** ![leaf_example](angular_leaf_spot_train.304.jpg)
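Until the TODO is filled in, a hedged loading sketch with timm; it assumes a timm release with Hugging Face Hub support (newer versions spell the prefix `hf-hub:`), and the dummy tensor stands in for a properly preprocessed leaf image:

```python
import timm
import torch

# Load the checkpoint directly from the Hub (assumes timm with Hub support installed).
model = timm.create_model("hf_hub:nateraw/timm-resnet50-beans", pretrained=True)
model.eval()

# Dummy input just to show the forward pass; real use would preprocess a bean-leaf
# photo with the transform matching the model's training config.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)
```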
mlkorra/OGBV-gender-bert-hi-en
mlkorra
2021-09-07T15:13:25Z
11
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## BERT Model for OGBV gendered text classification ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("mlkorra/OGBV-gender-bert-hi-en") model = AutoModelForSequenceClassification.from_pretrained("mlkorra/OGBV-gender-bert-hi-en") ``` ## Model Performance |Metric|dev|test| |---|--|--| |Accuracy|0.88|0.81| |F1(weighted)|0.86|0.80|
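Continuing from the loading snippet above, a hedged classification sketch; the code-mixed example sentence is illustrative, and the label names should be read from `model.config.id2label`:

```python
import torch

# Score a single Hindi-English example with the tokenizer and model loaded above.
inputs = tokenizer("tum bahut accha khelti ho", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label[pred])  # label names depend on the saved config
```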
kamalkraj/bioelectra-base-discriminator-pubmed
kamalkraj
2021-09-07T13:52:16Z
810
6
transformers
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset. For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/). Cite our paper using below citation ``` @inproceedings{kanakarajan-etal-2021-bioelectra, title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators", author = "Kanakarajan, Kamal raj and Kundumani, Bhuvana and Sankarasubbu, Malaikannan", booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.bionlp-1.16", doi = "10.18653/v1/2021.bionlp-1.16", pages = "143--154", abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.", } ``` ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()] ```
M47Labs/spanish_news_classification_headlines
M47Labs
2021-09-07T11:56:58Z
106
3
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- widget: - text: "El dΓ³lar se dispara tras la reuniΓ³n de la Fed" --- # Spanish News Classification Headlines SNCH: this model was develop by [M47Labs](https://www.m47labs.com/es/) the goal is text classification, the base model use was [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), it was fine-tuned on 1000 example dataset. ## Dataset Sample Dataset size : 1000 Columns: idTask,task content 1,idTag,tag. |idTask|task content 1|idTag|tag| |------|------|------|------| |3637d9ac-119c-4a8f-899c-339cf5b42ae0|AlcalΓ‘ de GuadaΓ­ra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilizaciΓ³n|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad| |d56bab52-0029-45dd-ad90-5c17d4ed4c88|El ArchipiΓ©lago Chinijo Graciplus se impone en el Trofeo Centro Comercial RubicΓ³n|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes| |dec70bc5-4932-4fa2-aeac-31a52377be02|Un total de 39 personas padecen ELA actualmente en la provincia|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad| |fb396ba9-fbf1-4495-84d9-5314eb731405|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes| |bc5a36ca-4e0a-422e-9167-766b41008c01|ResoluciΓ³n de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad| |a87f8703-ce34-47a5-9c1b-e992c7fe60f6|El primer ministro sueco pierde una mociΓ³n de censura|209ae89e-55b4-41fd-aac0-5400feab479e|politica| |d80bdaad-0ad5-43a0-850e-c473fd612526|El dΓ³lar se dispara tras la reuniΓ³n de la Fed|11925830-148e-4890-a2bc-da9dc059dc17|economia| ## Labels: * ciencia_tecnologia * clickbait * cultura * deportes * economia * educacion * medio_ambiente * opinion * politica * sociedad ## Example of Use ### Pipeline ```{python} import torch from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones' path = "M47Labs/spanish_news_classification_headlines" tokenizer = AutoTokenizer.from_pretrained(path) model = BertForSequenceClassification.from_pretrained(path) nlp = TextClassificationPipeline(task = "text-classification", model = model, tokenizer = tokenizer) print(nlp(review_text)) ``` ```[{'label': 'medio_ambiente', 'score': 0.5648820996284485}]``` ### Pytorch ```{python} import torch from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline from numpy import np model_name = 'M47Labs/spanish_news_classification_headlines' MAX_LEN = 32 tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno" encoded_review = tokenizer.encode_plus( texto, max_length=MAX_LEN, add_special_tokens=True, #return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'] attention_mask = encoded_review['attention_mask'] output = model(input_ids, attention_mask) _, prediction = torch.max(output['logits'], dim=1) print(f'Review text: {texto}') print(f'Sentiment : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}') ``` ```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno``` ```Sentiment : medio_ambiente``` A more in depth example on how to use the 
model can be found in this colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing ## Finetune Hyperparameters * MAX_LEN = 32 * TRAIN_BATCH_SIZE = 8 * VALID_BATCH_SIZE = 4 * EPOCHS = 5 * LEARNING_RATE = 1e-05 ## Train Results |n_example|epoch|loss|acc| |------|------|------|------| |100|0|2.286327266693115|12.5| |100|1|2.018876111507416|40.0| |100|2|1.8016730904579163|43.75| |100|3|1.6121837735176086|46.25| |100|4|1.41565443277359|68.75| |n_example|epoch|loss|acc| |------|------|------|------| |500|0|2.0770938420295715|24.5| |500|1|1.6953029704093934|50.25| |500|2|1.258900796175003|64.25| |500|3|0.8342628020048142|78.25| |500|4|0.5135736921429634|90.25| |n_example|epoch|loss|acc| |------|------|------|------| |1000|0|1.916002897115854|36.1997226074896| |1000|1|1.2941598492664295|62.2746185852982| |1000|2|0.8201534710415117|76.97642163661581| |1000|3|0.524806430051615|86.9625520110957| |1000|4|0.30662027455784463|92.64909847434119| ## Validation Results |n_examples|100| |------|------| |Accuracy Score|0.35| |Precision (Macro)|0.35| |Recall (Macro)|0.16| |n_examples|500| |------|------| |Accuracy Score|0.62| |Precision (Macro)|0.60| |Recall (Macro)|0.47| |n_examples|1000| |------|------| |Accuracy Score|0.68| |Precision(Macro)|0.68| |Recall (Macro)|0.64| ![alt text](https://media-exp1.licdn.com/dms/image/C4D0BAQHpfgjEyhtE1g/company-logo_200_200/0/1625210573748?e=1638403200&v=beta&t=toQNpiOlyim5Ja4f7Ejv8yKoCWifMsLWjkC7XnyXICI "Logo M47")
yannobla/Sunshine2
yannobla
2021-09-07T11:41:51Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
sabhi/t5-base-qa-qg
sabhi
2021-09-07T07:24:12Z
7
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-generation", "dataset:squadv1", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
datasets:
- squadv1
tags:
- question-generation
---

## T5 for multi-task QA and QG

This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation. For question generation, the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and prefixed with 'generate question: '. For QA, the input is processed like this: `question: question_text context: context_text </s>`

You can play with the model using the inference API. Here's how you can use it:

For QG

`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For QA

`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`

For more details see [this](https://github.com/sabhi27/question_generation) repo.

### Model in action πŸš€

You'll need to clone the [repo](https://github.com/sabhi27/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sabhi27/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

nlp = pipeline("multitask-qa-qg", model="sabhi/t5-base-qa-qg")

# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# for qa pass a dict with "question" and "context"
nlp({
  "question": "What is 42 ?",
  "context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
sv/gpt2-finetuned-nft-shakes-seuss-2
sv
2021-09-07T06:05:36Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-finetuned-nft-shakes-seuss-2 results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-nft-shakes-seuss-2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.3454 | 1.0 | 1490 | 4.1027 | | 4.0534 | 2.0 | 2980 | 3.9857 | | 3.9384 | 3.0 | 4470 | 3.9547 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
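The card above does not include an inference snippet. A minimal sketch with the standard `transformers` text-generation pipeline is shown below; it assumes the repository contains the tokenizer files that the Trainer normally saves alongside the model, and the prompt string is only an illustrative placeholder.

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint directly from the Hub
generator = pipeline("text-generation", model="sv/gpt2-finetuned-nft-shakes-seuss-2")

# Sample a few continuations from the model
outputs = generator(
    "Shall I compare thee",
    max_length=50,
    num_return_sequences=3,
    do_sample=True,
)
for sample in outputs:
    print(sample["generated_text"])
```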
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_larg-truncated-5b94d9
espnet
2021-09-07T03:11:55Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 inference: false --- # ESPnet2 ASR pretrained model ## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en` This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Python API ```text See https://github.com/espnet/espnet_model_zoo ``` ### Evaluate in the recipe ```python # coming soon ``` ### Results ```bash # RESULTS ## Environments - date: `Sat Jul 3 23:10:19 JST 2021` - python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]` - espnet version: `espnet 0.9.9` - pytorch version: `pytorch 1.7.0` - Git hash: `0f7558a716ab830d0c29da8785840124f358d47b` - Commit date: `Tue Jun 8 15:33:49 2021 -0400` ## asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.3|1.6|0.2|0.2|1.9|24.9| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|95.1|4.3|0.6|0.4|5.4|42.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.1|1.7|0.2|0.2|2.2|26.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|95.3|4.1|0.6|0.5|5.2|45.8| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.6|24.9| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.1|1.0|0.9|0.5|2.4|42.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|26.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.3|0.8|0.9|0.5|2.3|45.8| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|97.8|1.6|0.6|0.4|2.6|24.9| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|94.1|4.3|1.6|1.1|7.0|42.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|97.6|1.6|0.8|0.4|2.8|26.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|94.3|4.0|1.8|1.0|6.7|45.8| ``` ### Training config See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml) ```yaml config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp ngpu: 3 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 3 local_rank: 3 dist_master_addr: localhost dist_master_port: 33643 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
huggingtweets/_nisagiss-dril_gpt2-drilbot_neo
huggingtweets
2021-09-07T01:18:25Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/_nisagiss-dril_gpt2-drilbot_neo/1630977501917/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1386749605216407555/QIJeyWfE_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1320596112676409344/rgbeQhIA_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wintbot_neo & wint but Al & Nisa πŸ‡²πŸ‡½</div> <div style="text-align: center; font-size: 14px;">@_nisagiss-dril_gpt2-drilbot_neo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wintbot_neo & wint but Al & Nisa πŸ‡²πŸ‡½. | Data | wintbot_neo | wint but Al | Nisa πŸ‡²πŸ‡½ | | --- | --- | --- | --- | | Tweets downloaded | 3246 | 3198 | 2993 | | Retweets | 255 | 41 | 2553 | | Short tweets | 243 | 49 | 158 | | Tweets kept | 2748 | 3108 | 282 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xq1ao3o5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_nisagiss-dril_gpt2-drilbot_neo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/knmkilof) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/knmkilof/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_nisagiss-dril_gpt2-drilbot_neo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/dynatronne
huggingtweets
2021-09-07T01:15:25Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/dynatronne/1630977321484/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1396079009604280325/W6petcWe_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">dt keith katze</div> <div style="text-align: center; font-size: 14px;">@dynatronne</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from dt keith katze. | Data | dt keith katze | | --- | --- | | Tweets downloaded | 3009 | | Retweets | 2428 | | Short tweets | 142 | | Tweets kept | 439 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26uf3rn6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dynatronne's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qxjo6s7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qxjo6s7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dynatronne') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/discountpicasso-dril-liam_100000
huggingtweets
2021-09-07T00:14:05Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/discountpicasso-dril-liam_100000/1630973640579/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/980964012170121217/U6FjPH4H_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LIAM & wint & Picasso</div> <div style="text-align: center; font-size: 14px;">@discountpicasso-dril-liam_100000</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LIAM & wint & Picasso. | Data | LIAM | wint | Picasso | | --- | --- | --- | --- | | Tweets downloaded | 1962 | 3226 | 3216 | | Retweets | 135 | 472 | 427 | | Short tweets | 435 | 313 | 421 | | Tweets kept | 1392 | 2441 | 2368 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w4ekve8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discountpicasso-dril-liam_100000's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/discountpicasso-dril-liam_100000') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ak92501-cafe_orbitinnit-ihatesinglets
huggingtweets
2021-09-07T00:03:08Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ak92501-cafe_orbitinnit-ihatesinglets/1630972983357/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1429115399975497731/JZdA725e_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1433245625429204993/xzzFE2CJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1405992051427229698/V3W-1gOb_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">✨たけ Tommy’s an Orbit πŸŒ™ たけ✨ & everyone in the system this isn’t normal & AK</div> <div style="text-align: center; font-size: 14px;">@ak92501-cafe_orbitinnit-ihatesinglets</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ✨たけ Tommy’s an Orbit πŸŒ™ たけ✨ & everyone in the system this isn’t normal & AK. | Data | ✨たけ Tommy’s an Orbit πŸŒ™ たけ✨ | everyone in the system this isn’t normal | AK | | --- | --- | --- | --- | | Tweets downloaded | 2256 | 1151 | 3250 | | Retweets | 1350 | 78 | 403 | | Short tweets | 323 | 352 | 464 | | Tweets kept | 583 | 721 | 2383 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mhwl02od/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ak92501-cafe_orbitinnit-ihatesinglets's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m05466la) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m05466la/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ak92501-cafe_orbitinnit-ihatesinglets') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
julien-c/dummy-for-flat
julien-c
2021-09-06T21:02:55Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
in the editor i only change this line Example of a hf.co repo containing signed commits. hello tabs
yseop/FNP_T5_D2T_complete
yseop
2021-09-06T20:54:21Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# T5-base data to text model specialized for Finance NLG __complete version__ ---- ## Usage (HuggingFace Transformers) #### Call the model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("yseop/FNP_T5_D2T_complete") model = AutoModelForSeq2SeqLM.from_pretrained("yseop/FNP_T5_D2T_complete") text = ["Group profit | valIs | € 115.7 million && € 115.7 million | dTime | in 2019"] ``` #### Choose a generation method ```python input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt") p = 0.82 k = 90 outputs = model.generate(input_ids, do_sample=True, top_p=p, top_k=k, early_stopping=True) print(tokenizer.decode(outputs[0])) ``` ```python input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt") outputs = model.generate(input_ids, max_length=200, num_beams=2, repetition_penalty=2.5, top_k=50, top_p=0.98, length_penalty=1.0, early_stopping=True) print(tokenizer.decode(outputs[0])) ``` **Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation.
sv/gpt2-finetuned-nft-shakes
sv
2021-09-06T16:59:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-finetuned-nft-shakes results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-nft-shakes This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7566 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 306 | 3.9679 | | 4.2957 | 2.0 | 612 | 3.7979 | | 4.2957 | 3.0 | 918 | 3.7566 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
huggingartists/max-korzh
huggingartists
2021-09-06T13:34:44Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/max-korzh", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/max-korzh tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/a1486b5b6f28eeec202b55e983e464c5.567x567x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Макс ΠšΠΎΡ€ΠΆ (Max Korzh)</div> <a href="https://genius.com/artists/max-korzh"> <div style="text-align: center; font-size: 14px;">@max-korzh</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Макс ΠšΠΎΡ€ΠΆ (Max Korzh). Dataset is available [here](https://huggingface.co/datasets/huggingartists/max-korzh). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/max-korzh") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2lupo5gy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Макс ΠšΠΎΡ€ΠΆ (Max Korzh)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1pm64gaa) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1pm64gaa/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/max-korzh') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/max-korzh") model = AutoModelWithLMHead.from_pretrained("huggingartists/max-korzh") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/matsu_bouzu
huggingtweets
2021-09-06T13:27:36Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/matsu_bouzu/1630934852210/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1398242436082638855/mvzIZACg_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ζΎζœ¬δΊΊεΏ—</div> <div style="text-align: center; font-size: 14px;">@matsu_bouzu</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ζΎζœ¬δΊΊεΏ—. | Data | ζΎζœ¬δΊΊεΏ— | | --- | --- | | Tweets downloaded | 808 | | Retweets | 30 | | Short tweets | 504 | | Tweets kept | 274 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fwqkxzg7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @matsu_bouzu's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1af81o1n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1af81o1n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/matsu_bouzu') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
superb/hubert-base-superb-ic
superb
2021-09-06T12:11:28Z
367
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "speech", "en", "dataset:superb", "arxiv:2105.01051", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- language: en datasets: - superb tags: - speech - audio-classification - hubert license: apache-2.0 --- # Hubert-Base for Intent Classification ## Model description This is a ported version of [S3PRL's Hubert for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands). The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/) dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**. For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands). ## Usage examples You can use the model directly like so: ```python import torch import librosa from datasets import load_dataset from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "ic", split="test") dataset = dataset.map(map_to_array) model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ic") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-ic") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits action_ids = torch.argmax(logits[:, :6], dim=-1).tolist() action_labels = [model.config.id2label[_id] for _id in action_ids] object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist() object_labels = [model.config.id2label[_id + 6] for _id in object_ids] location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist() location_labels = [model.config.id2label[_id + 20] for _id in location_ids] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**test**| `0.9834` | `N/A` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
lewtun/metnet-test-5
lewtun
2021-09-06T11:01:50Z
2
0
transformers
[ "transformers", "pytorch", "satflow", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: mit tags: - satflow --- # MetNet ## Model description [More information needed] ## Intended uses & limitations [More information needed] ## How to use [More information needed] ## Limitations and bias [More information needed] ## Training data [More information needed] ## Training procedure [More information needed] ## Evaluation results [More information needed]
lewtun/metnet-test-4
lewtun
2021-09-06T11:00:39Z
1
0
transformers
[ "transformers", "pytorch", "satflow", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: mit tags: - satflow --- # Model Card for MetNet ## Model description [More information needed] ## Intended uses & limitations [More information needed] ## How to use [More information needed] ## Limitations and bias [More information needed] ## Training data [More information needed] ## Training procedure [More information needed] ## Evaluation results [More information needed]
megagonlabs/t5-base-japanese-web
megagonlabs
2021-09-06T10:32:21Z
254
18
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "seq2seq", "ja", "dataset:mc4", "dataset:wiki40b", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: ja tags: - t5 - text2text-generation - seq2seq license: apache-2.0 datasets: - mc4 - wiki40b --- # t5-base-japanese-web (with Byte-fallback, 32K) ## Description [megagonlabs/t5-base-japanese-web](https://huggingface.co/megagonlabs/t5-base-japanese-web) is a T5 (Text-to-Text Transfer Transformer) model pre-trained on Japanese web texts. Training codes are [available on GitHub](https://github.com/megagonlabs/t5-japanese). The vocabulary size of this model is 32K. [8K version is also available](https://huggingface.co/megagonlabs/t5-base-japanese-web-8k). ### Corpora We used following corpora for pre-training. - Japanese in [mC4/3.0.1](https://huggingface.co/datasets/mc4) (We used [Tensorflow native format](https://github.com/allenai/allennlp/discussions/5056)) - 87,425,304 pages - 782 GB in TFRecord format - [Japanese](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) in [wiki40b/1.3.0](https://www.tensorflow.org/datasets/catalog/wiki40b) - 828,236 articles (2,073,584 examples) - 2 GB in TFRecord format ### Tokenizer We used Japanese Wikipedia to train [SentencePiece](https://github.com/google/sentencepiece). - Vocabulary size: 32,000 - [Byte-fallback](https://github.com/google/sentencepiece/releases/tag/v0.1.9): Enabled ### Parameters - T5 model: [models/t5.1.1.base.gin](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/gin/models/t5.1.1.base.gin) - Training steps: 1,000,000 It took about 126 hours with TPU v3-8 ## Related models - [ζ—₯本θͺžT5δΊ‹ε‰ε­¦ηΏ’ζΈˆγΏγƒ’γƒ‡γƒ« (sonoisa/t5-base-japanese)](https://huggingface.co/sonoisa/t5-base-japanese) - [ζ—₯本θͺžT5δΊ‹ε‰ε­¦ηΏ’ζΈˆγΏγƒ’γƒ‡γƒ« (sonoisa/t5-base-japanese-mC4-Wikipedia)](https://huggingface.co/sonoisa/t5-base-japanese-mC4-Wikipedia) ## License Apache License 2.0 ## Citations - mC4 Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/). ```bibtex @article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, } ``` - wiki40b ```bibtex @inproceedings{49029, title = {Wiki-40B: Multilingual Language Model Dataset}, author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou}, year = {2020}, booktitle = {LREC 2020} } ```
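The card above documents pre-training only and ships no usage code. A minimal loading sketch with the standard `transformers` classes might look like the following; note this is only an assumption about how the checkpoint is loaded, and the model still requires task-specific fine-tuning before it is useful.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("megagonlabs/t5-base-japanese-web")
model = T5ForConditionalGeneration.from_pretrained("megagonlabs/t5-base-japanese-web")

# This checkpoint was only pre-trained with the span-corruption objective;
# fine-tune it on a downstream text2text task before running inference.
batch = tokenizer("γ“γ‚“γ«γ‘γ―γ€δΈ–η•Œγ€‚", return_tensors="pt")
print(batch["input_ids"].shape)
```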
recobo/chemical-bert-uncased-simcse
recobo
2021-09-06T05:52:59Z
17
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # recobo/chemical-bert-uncased-simcse ```python from sentence_transformers import SentenceTransformer model_name = 'recobo/chemical-bert-uncased-simcse' model = SentenceTransformer(model_name) ```
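The snippet above only loads the model. A short sketch of how a sentence-similarity model like this is typically used with sentence-transformers, encoding two chemistry-flavoured sentences (hypothetical examples, not from the card) and comparing them with cosine similarity:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("recobo/chemical-bert-uncased-simcse")

sentences = [
    "Sodium chloride dissolves readily in water.",
    "NaCl is highly soluble in aqueous solution.",
]
embeddings = model.encode(sentences)  # numpy array of shape (2, hidden_size)

# Cosine similarity between the two sentence embeddings
cos_sim = np.dot(embeddings[0], embeddings[1]) / (
    np.linalg.norm(embeddings[0]) * np.linalg.norm(embeddings[1])
)
print(cos_sim)
```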
elisno/is_ner_mim_trf
elisno
2021-09-05T19:26:16Z
4
0
spacy
[ "spacy", "token-classification", "is", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - is model-index: - name: is_ner_mim_trf results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9193318395 - name: NER Recall type: recall value: 0.9217728758 - name: NER F Score type: f_score value: 0.9205507394 --- | Feature | Description | | --- | --- | | **Name** | `is_ner_mim_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.1.1,<3.2.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (8 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 92.06 | | `ENTS_P` | 91.93 | | `ENTS_R` | 92.18 | | `TRANSFORMER_LOSS` | 248325.98 | | `NER_LOSS` | 120059.07 |
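The card lists the label scheme and accuracy but no usage snippet. A minimal sketch, assuming the packaged pipeline from this repo has been installed (spaCy model repos usually ship an installable wheel), with a made-up Icelandic example sentence:

```python
import spacy

# Load the installed Icelandic NER pipeline by its package name
nlp = spacy.load("is_ner_mim_trf")

doc = nlp("KatrΓ­n JakobsdΓ³ttir fundaΓ°i meΓ° forseta Γslands Γ­ ReykjavΓ­k Γ­ gΓ¦r.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```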
mwesner/reformer-clm
mwesner
2021-09-05T13:44:41Z
5
0
transformers
[ "transformers", "pytorch", "reformer", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
model-index:
- name: reformer-clm
---

## reformer-clm

This causal language model was trained from scratch on the CNN/DailyMail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.8321        | 1.0   | 18412  | 3.8074          |
| 3.4965        | 2.0   | 36824  | 3.4223          |
| 3.1927        | 3.0   | 55236  | 3.0815          |
| 3.046         | 4.0   | 73648  | 2.9270          |
| 2.9781        | 5.0   | 92060  | 2.8515          |
| 2.9398        | 6.0   | 110472 | 2.8082          |
| 2.9293        | 7.0   | 128884 | 2.7904          |
| 2.9212        | 8.0   | 147296 | 2.7817          |
| 2.9169        | 9.0   | 165708 | 2.7787          |
| 2.9197        | 10.0  | 184120 | 2.7783          |

### Framework versions

- Transformers 4.6.1
- Pytorch 1.9.0
- Datasets 1.2.1
- Tokenizers 0.10.3
bayartsogt/mlub-bert-large-cased-tr5do30ep25s42
bayartsogt
2021-09-05T11:29:06Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
|fold|accuracy| |-|-| | fold 0 | 0.9730504587155964 | | fold 1 | 0.9690366972477065 | | fold 2 | 0.970756880733945 | | fold 3 | 0.9684633027522935 | | fold 4 | 0.9719036697247706 | | OOF Acc | 0.9706422018348624 |
bayartsogt/mlub-bert-large-uncased-tr5do20ep25s42
bayartsogt
2021-09-05T11:26:54Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
|fold|accuracy| |-|-| | fold 0 | 0.9753440366972477 | | fold 1 | 0.9678899082568807 | | fold 2 | 0.9747706422018348 | | fold 3 | 0.9690366972477065 | | fold 4 | 0.9759174311926605 | | OOF Acc | 0.9725917431192661 |
devin132/w2v-timit-ft-4001
devin132
2021-09-04T22:35:42Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
# Fine-tuned Wav2Vec2 on TIMIT - checkpoint 4001
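The card gives no usage example. A minimal ASR inference sketch with the standard `transformers` CTC API follows; it assumes the repository includes the Wav2Vec2 processor/vocabulary files and that the input audio is 16 kHz mono (as TIMIT is). The file path is a placeholder.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("devin132/w2v-timit-ft-4001")
model = Wav2Vec2ForCTC.from_pretrained("devin132/w2v-timit-ft-4001")

# Load a 16 kHz mono waveform (TIMIT-style audio)
speech, _ = librosa.load("example.wav", sr=16000, mono=True)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```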
recobo/chemical-bert-uncased-tsdae
recobo
2021-09-04T21:17:19Z
14
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # recobo/chemical-bert-uncased-tsdae ```python from sentence_transformers import SentenceTransformer model_name = 'recobo/chemical-bert-uncased-tsdae' model = SentenceTransformer(model_name) ```
bshlgrs/autonlp-classification_with_all_labellers-9532137
bshlgrs
2021-09-04T21:03:27Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bshlgrs/autonlp-data-classification_with_all_labellers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP πŸ€—" datasets: - bshlgrs/autonlp-data-classification_with_all_labellers --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 9532137 ## Validation Metrics - Loss: 0.34556105732917786 - Accuracy: 0.8749890724713699 - Macro F1: 0.5243623959669343 - Micro F1: 0.8749890724713699 - Weighted F1: 0.8638030768409057 - Macro Precision: 0.5016762404900895 - Micro Precision: 0.8749890724713699 - Weighted Precision: 0.8547962562614184 - Macro Recall: 0.5529674694200845 - Micro Recall: 0.8749890724713699 - Weighted Recall: 0.8749890724713699 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification_with_all_labellers-9532137 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
superb/hubert-large-superb-ic
superb
2021-09-04T20:48:25Z
4
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "speech", "audio", "en", "dataset:superb", "arxiv:2105.01051", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- language: en datasets: - superb tags: - speech - audio - hubert license: apache-2.0 --- # Hubert-Large for Intent Classification ## Model description This is a ported version of [S3PRL's Hubert for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands). The base model is [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/) dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**. For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands). ## Usage examples You can use the model directly like so: ```python import torch import librosa from datasets import load_dataset from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "ic", split="test") dataset = dataset.map(map_to_array) model = HubertForSequenceClassification.from_pretrained("superb/hubert-large-superb-ic") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-large-superb-ic") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits action_ids = torch.argmax(logits[:, :6], dim=-1).tolist() action_labels = [model.config.id2label[_id] for _id in action_ids] object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist() object_labels = [model.config.id2label[_id + 6] for _id in object_ids] location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist() location_labels = [model.config.id2label[_id + 20] for _id in location_ids] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**test**| `0.9876` | `N/A` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
superb/wav2vec2-large-superb-ic
superb
2021-09-04T19:52:29Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "speech", "audio", "en", "dataset:superb", "arxiv:2105.01051", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- language: en datasets: - superb tags: - speech - audio - wav2vec2 license: apache-2.0 --- # Wav2Vec2-Large for Intent Classification ## Model description This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands). The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/) dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**. For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands). ## Usage examples You can use the model directly like so: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "ic", split="test") dataset = dataset.map(map_to_array) model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-ic") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-ic") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits action_ids = torch.argmax(logits[:, :6], dim=-1).tolist() action_labels = [model.config.id2label[_id] for _id in action_ids] object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist() object_labels = [model.config.id2label[_id + 6] for _id in object_ids] location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist() location_labels = [model.config.id2label[_id + 20] for _id in location_ids] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**test**| `0.9528` | `N/A` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
Wikidepia/IndoT5-base-paraphrase
Wikidepia
2021-09-04T02:49:33Z
109
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "id", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - id --- # Paraphrase Generation with IndoT5 Base IndoT5-base trained on translated PAWS. ## Model in action ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoT5-base-paraphrase") model = AutoModelForSeq2SeqLM.from_pretrained("Wikidepia/IndoT5-base-paraphrase") sentence = "Anak anak melakukan piket kelas agar kebersihan kelas terjaga" text = "paraphrase: " + sentence + " </s>" encoding = tokenizer(text, padding='longest', return_tensors="pt") outputs = model.generate( input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"], max_length=512, do_sample=True, top_k=200, top_p=0.95, early_stopping=True, num_return_sequences=5 ) ``` ## Limitations The paraphrase sometimes contains dates that do not exist in the original text. ## Acknowledgement Thanks to the TensorFlow Research Cloud for providing TPU v3-8s.
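The snippet above stops at `model.generate`. As a minimal follow-up sketch (it reuses the `tokenizer` and `outputs` variables defined in that snippet; not prescribed by the card itself), the sampled candidates can be decoded back into paraphrase strings:

```python
# Decode the five sampled candidates returned by model.generate above
paraphrases = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for i, paraphrase in enumerate(paraphrases, start=1):
    print(f"{i}. {paraphrase}")
```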
tau/splinter-large-qass
tau
2021-09-03T08:47:23Z
7
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "SplinterModel", "en", "arxiv:2108.05857", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en tags: - splinter - SplinterModel license: apache-2.0 --- # Splinter large model (with pretrained QASS-layer weights) Splinter-large is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive. Note (1): This model **does** contain the pretrained weights for the QASS layer (see paper for details). For the model **without** those weights, see [tau/splinter-large](https://huggingface.co/tau/splinter-large). Note (2): Splinter-large was trained after the paper was released, so its results are not reported there. However, this model outperforms the base model by large margins. For example, on SQuAD, the model is able to reach 80% F1 given only 128 examples (whereas the base model obtains only ~73%). See the results for Splinter-large in the Appendix of [this paper](https://arxiv.org/pdf/2108.05857.pdf). ## Model description Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span Selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions). ## Intended uses & limitations The prime use for this model is few-shot extractive QA. ## Pretraining The model was pretrained on a v3-32 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details. ### BibTeX entry and citation info ```bibtex @inproceedings{ram-etal-2021-shot, title = "Few-Shot Question Answering by Pretraining Span Selection", author = "Ram, Ori and Kirstain, Yuval and Berant, Jonathan and Globerson, Amir and Levy, Omer", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.239", doi = "10.18653/v1/2021.acl-long.239", pages = "3066--3079", } ```
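The card gives no inference snippet. A minimal sketch, assuming the Splinter classes shipped in recent versions of Transformers (v4.11+) and an illustrative question/context pair, could look like this:

```python
import torch
from transformers import AutoTokenizer, SplinterForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("tau/splinter-large-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-large-qass")

question = "Who is the CEO of the company?"
context = "The company announced record profits. Its CEO, Jane Smith, thanked the employees."

# The Splinter tokenizer builds the question/context pair with the special [QUESTION] token
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Select the most likely answer span and decode it
start = torch.argmax(outputs.start_logits, dim=-1).item()
end = torch.argmax(outputs.end_logits, dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```

The QASS weights included in this checkpoint provide the question-aware span selection head used in this sketch.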
xhyi/PT_GPTNEO1300_Delish_v6
xhyi
2021-09-02T22:29:48Z
3
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# Delish v6 (GPT-Neo 1.3B) This model is from the DelishBot project.
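The card does not say how to load the checkpoint. A minimal sketch, assuming the standard Transformers text-generation pipeline (the prompt is purely illustrative):

```python
from transformers import pipeline

# GPT-Neo 1.3B checkpoint fine-tuned for the DelishBot project
generator = pipeline("text-generation", model="xhyi/PT_GPTNEO1300_Delish_v6")

print(generator("Today I want to cook", max_length=50, num_return_sequences=1)[0]["generated_text"])
```

Note that this is a 1.3B-parameter model, so loading it requires several GB of RAM or GPU memory.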
huggingartists/lil-nas-x
huggingartists
2021-09-02T20:06:24Z
8
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/lil-nas-x", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/lil-nas-x tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/f50e1ac333da1f744f98eec38e44dd29.640x640x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– HuggingArtists Model πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lil Nas X</div> <a href="https://genius.com/artists/lil-nas-x"> <div style="text-align: center; font-size: 14px;">@lil-nas-x</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Lil Nas X. Dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-nas-x). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/lil-nas-x") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/n5s2tj7p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Nas X's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/334lnf7p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/334lnf7p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/lil-nas-x') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-nas-x") model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-nas-x") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
mnaylor/psychbert-cased
mnaylor
2021-09-02T13:57:46Z
14
7
transformers
[ "transformers", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# PsychBERT This domain-adapted language model is pretrained from the `bert-base-cased` checkpoint on masked language modeling, using a dataset of ~40,000 PubMed papers in the domain of psychology, psychiatry, mental health, and behavioral health, as well as a dataset of roughly 200,000 social media conversations about mental health. This work is submitted as an entry for BIBM 2021. **Note**: the token-prediction widget on this page does not work with Flax models. In order to use the model, please pull it into a Python session as follows: ```python from transformers import FlaxAutoModelForMaskedLM, AutoModelForMaskedLM # load as a flax model flax_lm = FlaxAutoModelForMaskedLM.from_pretrained('mnaylor/psychbert-cased') # load as a pytorch model # requires flax to be installed in your environment pytorch_lm = AutoModelForMaskedLM.from_pretrained('mnaylor/psychbert-cased', from_flax=True) ``` Authors: Vedant Vajre, Mitch Naylor, Uday Kamath, Amarda Shehu
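As a follow-up sketch (assuming the tokenizer files are present in the repository; the example sentence is illustrative), the converted PyTorch weights can be used for masked-token prediction with the standard fill-mask pipeline:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mnaylor/psychbert-cased")
# Converting from Flax requires flax to be installed, as noted above
model = AutoModelForMaskedLM.from_pretrained("mnaylor/psychbert-cased", from_flax=True)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("The patient reported symptoms of [MASK] and insomnia."))
```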
SaulLu/test-add-new-model
SaulLu
2021-09-02T12:47:36Z
6
0
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "arxiv:2107.06955", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
# HTLM Pretraining Dataset: 23TB of simplified HTML extracted from Common Crawl dumps Paper: [HTLM: Hyper-Text Pre-Training and Prompting of Language Models](https://arxiv.org/abs/2107.06955) Authors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer Disclaimer: The team releasing HTLM did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Abstract We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. ## Usage For the moment you can use it as is to do a classic mask-filling task (see the snippet below) or fine-tune it on a downstream task. ```python from transformers import BartTokenizer, BartForConditionalGeneration TXT = "My friends are <mask> but they eat too many carbs." model_name = "SaulLu/test-add-new-model" tokenizer = BartTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) input_ids = tokenizer([TXT], return_tensors='pt')['input_ids'] logits = model(input_ids).logits masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) tokenizer.decode(predictions).split() ```
flax-community/gpt2-medium-indonesian
flax-community
2021-09-02T12:22:45Z
20
6
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "id", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: id widget: - text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira." --- # GPT2-medium-indonesian This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team. The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian). ## How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='flax-community/gpt2-medium-indonesian') >>> set_seed(42) >>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5) [{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\ β€œKau tau, bagaimana dulu kita bertemu?” aku'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\ Tuhan akan memberi lebih dari apa yang kita'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian') model = GPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian') model = TFGPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Limitations and bias The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model. 
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we > do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry > out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, > race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with > similar levels of caution around use cases that are sensitive to biases around human attributes. We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/flax-community/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/flax-community/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications. ### Gender bias We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online. ![gender bias - male](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_male.png) The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant). ![gender bias - female](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_female.png) ### Ethnicity bias We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme: * Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity) * Topic - we will use 5 different topics: * random act: *entered home* * said: *said* * works as: *works as* * intent: *let [person] ...* * define: *is* Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...) We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector. The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline. ![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_ethnicity.png) ### Religion bias With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. 
We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline. The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline. ![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_religion.png) ## Training data The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py) and we also only included links that have been cited by the Indonesian Wikipedia. ## Training procedure The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `6d 3h 7m 26s`. ### Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | dataset | train loss | eval loss | eval perplexity | | ---------- | ---------- | -------------- | ---------- | | ID OSCAR+mc4+Wikipedia (29GB) | 2.79 | 2.696 | 14.826 | ### Tracking The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-medium-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya). ## Team members - Akmal ([@Wikidepia](https://huggingface.co/Wikidepia)) - alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner)) - Cahya Wirawan ([@cahya](https://huggingface.co/cahya)) - Galuh Sahid ([@Galuh](https://huggingface.co/Galuh)) - Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia)) - Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli)) - Samsul Rahmadani ([@munggok](https://huggingface.co/munggok)) ## Future work We would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains if we can get the necessary hardware resources.
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
DataikuNLP
2021-09-02T08:31:10Z
393
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) from sentence-transformers at the specific commit `d66eff4d8a8598f264f166af8db67f7797164651`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
DataikuNLP/distiluse-base-multilingual-cased-v1
DataikuNLP
2021-09-02T08:25:03Z
157
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # DataikuNLP/distiluse-base-multilingual-cased-v1 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) from sentence-transformers at the specific commit `3a706e4d65c04f868c4684adfd4da74141be8732`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v1) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
DataikuNLP/TinyBERT_General_4L_312D
DataikuNLP
2021-09-02T08:09:47Z
96
1
transformers
[ "transformers", "pytorch", "jax", "bert", "arxiv:1909.10351", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
TinyBERT: Distilling BERT for Natural Language Understanding ======== **This model is a copy of [this model repository](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) from Huawei Noah at the specific commit `34707a33cd59a94ecde241ac209bf35103691b43`.** TinyBERT is 7.5x smaller and 9.4x faster on inference than BERT-base and achieves competitive performances in the tasks of natural language understanding. It performs a novel transformer distillation at both the pre-training and task-specific learning stages. In general distillation, we use the original BERT-base without fine-tuning as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on the text from general domain, we obtain a general TinyBERT which provides a good initialization for the task-specific distillation. We here provide the general TinyBERT for your tasks at hand. For more details about the techniques of TinyBERT, refer to our paper: [TinyBERT: Distilling BERT for Natural Language Understanding](https://arxiv.org/abs/1909.10351) Citation ======== If you find TinyBERT useful in your research, please cite the following paper: ``` @article{jiao2019tinybert, title={Tinybert: Distilling bert for natural language understanding}, author={Jiao, Xiaoqi and Yin, Yichun and Shang, Lifeng and Jiang, Xin and Chen, Xiao and Li, Linlin and Wang, Fang and Liu, Qun}, journal={arXiv preprint arXiv:1909.10351}, year={2019} } ```
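The card does not include a loading example. A minimal sketch using the generic Auto classes (the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DataikuNLP/TinyBERT_General_4L_312D")
model = AutoModel.from_pretrained("DataikuNLP/TinyBERT_General_4L_312D")

inputs = tokenizer("TinyBERT is a compact student of BERT-base.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# 4-layer student with 312-dimensional hidden states
print(outputs.last_hidden_state.shape)
```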
DataikuNLP/paraphrase-MiniLM-L6-v2
DataikuNLP
2021-09-02T08:05:59Z
57
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # DataikuNLP/paraphrase-MiniLM-L6-v2 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2/) from sentence-transformers at the specific commit `c4dfcde8a3e3e17e85cd4f0ec1925a266187f48e`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
Hoang/distilbert-base-uncased-finetuned-squad
Hoang
2021-09-02T07:32:09Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: - task: name: Question Answering type: question-answering dataset: name: squad type: squad args: plain_text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2176 | 1.0 | 5533 | 1.1429 | | 0.9425 | 2.0 | 11066 | 1.1196 | | 0.7586 | 3.0 | 16599 | 1.1582 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
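The autogenerated card has no usage example. A minimal sketch using the question-answering pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Hoang/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```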
xhyi/distilLED3_08_31_2021_v5
xhyi
2021-09-02T01:44:58Z
5
0
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
| Training Loss | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|---------------|-----------------|------------------|---------------|-----------------|
| 2.880900      | 2.715085        | 0.121400         | 0.142300      | 0.117100        |

+200 steps (total = 440 steps).

Tokenization: max article length 8192, max abstract length 512.
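No usage snippet is given. A minimal sketch, assuming the checkpoint follows the standard LED sequence-to-sequence interface in Transformers (the input document is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("xhyi/distilLED3_08_31_2021_v5")
model = AutoModelForSeq2SeqLM.from_pretrained("xhyi/distilLED3_08_31_2021_v5")

article = "..."  # a long document; the model was trained with articles up to 8192 tokens

inputs = tokenizer(article, max_length=8192, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=512, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```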
xhyi/distilLED1_08_31_2021_v3
xhyi
2021-09-02T01:41:23Z
4
0
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
| Step | Training Loss | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|------|---------------|-----------------|------------------|---------------|-----------------|
| 240  | 2.513600      | 3.049892        | 0.082800         | 0.102600      | 0.085700        |

240 steps total.
gagan3012/bert-tiny-finetuned-ner
gagan3012
2021-09-01T23:50:44Z
64
4
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-tiny-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.8083060109289617 - name: Recall type: recall value: 0.8273856136033113 - name: F1 type: f1 value: 0.8177345348001547 - name: Accuracy type: accuracy value: 0.9597597979252387 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-tiny-finetuned-ner This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1689 - Precision: 0.8083 - Recall: 0.8274 - F1: 0.8177 - Accuracy: 0.9598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0355 | 1.0 | 878 | 0.1692 | 0.8072 | 0.8248 | 0.8159 | 0.9594 | | 0.0411 | 2.0 | 1756 | 0.1678 | 0.8101 | 0.8277 | 0.8188 | 0.9600 | | 0.0386 | 3.0 | 2634 | 0.1697 | 0.8103 | 0.8269 | 0.8186 | 0.9599 | | 0.0373 | 4.0 | 3512 | 0.1694 | 0.8106 | 0.8263 | 0.8183 | 0.9600 | | 0.0383 | 5.0 | 4390 | 0.1689 | 0.8083 | 0.8274 | 0.8177 | 0.9598 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
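The autogenerated card has no inference example. A minimal sketch using the token-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gagan3012/bert-tiny-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```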
DataikuNLP/average_word_embeddings_glove.6B.300d
DataikuNLP
2021-09-01T15:57:24Z
0
1
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity --- # average_word_embeddings_glove.6B.300d **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/average_word_embeddings_glove.6B.300d) from sentence-transformers at the specific commit `5d2b7d1c127036ae98b9d487eca4d48744edc709`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/average_word_embeddings_glove.6B.300d') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/average_word_embeddings_glove.6B.300d) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(400001, 300) ) (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
espnet/su_openslr36
espnet
2021-09-01T15:51:23Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "su", "dataset:su_openslr36", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: su datasets: - su_openslr36 license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `su_openslr36` ♻️ Imported from https://zenodo.org/record/5090135/ This model was trained by su_openslr36 using su_openslr36/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
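The demo section above is still marked "coming soon". A minimal sketch, assuming the usual ESPnet2 inference API (requires the `espnet_model_zoo` package; the audio file is a placeholder):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and builds the pretrained ASR model from the Hub
speech2text = Speech2Text.from_pretrained("espnet/su_openslr36")

speech, rate = soundfile.read("speech.wav")  # 16 kHz mono audio assumed
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```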
patrickvonplaten/wav2vec2_tiny_random_robust
patrickvonplaten
2021-09-01T14:48:17Z
86
0
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - automatic-speech-recognition license: apache-2.0 --- ## Test model To test this model run the following code: ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC import torchaudio import torch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2_tiny_random_robust") def load_audio(batch): batch["samples"], _ = torchaudio.load(batch["file"]) return batch ds = ds.map(load_audio) input_values = torch.nn.utils.rnn.pad_sequence([torch.tensor(x[0]) for x in ds["samples"][:10]], batch_first=True) # forward logits = model(input_values).logits pred_ids = torch.argmax(logits, dim=-1) # dummy loss dummy_labels = pred_ids.clone() dummy_labels[dummy_labels == model.config.pad_token_id] = 1 # can't have CTC blank token in label dummy_labels = dummy_labels[:, -(dummy_labels.shape[1] // 4):] # make sure labels are shorter to avoid "inf" loss (can still happen though...) loss = model(input_values, labels=dummy_labels).loss ```
eugenesiow/mdsr
eugenesiow
2021-09-01T06:42:34Z
78
1
transformers
[ "transformers", "MDSR", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:1707.02921", "arxiv:2104.07566", "arxiv:1803.08664", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Multi-Scale Deep Super-Resolution System (MDSR) MDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/mdsr_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description The MDSR is a model that uses both deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, normalizing intermediate features may not be desirable) instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE), the authors showed better performance empirically and it requires less computation. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. ### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import MdsrModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = MdsrModel.from_pretrained('eugenesiow/mdsr', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. 
During training, RGB patches with size of 64Γ—64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. The training code is provided below: ```python from super_image import Trainer, TrainingArguments, MdsrModel, MdsrConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = MdsrConfig( scale=4, # train a model to upscale 4x ) model = MdsrModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. 
|Dataset |Scale |Bicubic |mdsr | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**38.04/0.9608** | |Set5 |3x |30.39/0.8678 |**35.11/0.9406** | |Set5 |4x |28.42/0.8101 |**32.26/0.8953** | |Set14 |2x |30.22/0.8683 |**33.71/0.9184** | |Set14 |3x |27.53/0.7737 |**31.06/0.8593** | |Set14 |4x |25.99/0.7023 |**28.77/0.7856** | |BSD100 |2x |29.55/0.8425 |**33.79/0.9256** | |BSD100 |3x |27.20/0.7382 |**29.66/0.8196** | |BSD100 |4x |25.96/0.6672 |**28.53/0.7653** | |Urban100 |2x |26.66/0.8408 |**32.14/0.9283** | |Urban100 |3x | |**29.29/0.8738** | |Urban100 |4x |23.14/0.6573 |**26.07/0.7851** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/mdsr_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @article{ahn2018fast, title={Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network}, author={Ahn, Namhyuk and Kang, Byungkon and Sohn, Kyung-Ah}, journal={arXiv preprint arXiv:1803.08664}, year={2018} } ```
nateraw/vit-base-cats-vs-dogs
nateraw
2021-08-31T20:02:08Z
92
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:cats_vs_dogs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - image-classification - pytorch datasets: - cats_vs_dogs metrics: - accuracy model-index: - name: vit-base-cats-vs-dogs results: - task: name: Image Classification type: image-classification dataset: name: cats_vs_dogs type: cats_vs_dogs args: default metrics: - name: Accuracy type: accuracy value: 0.9934510250569476 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-cats-vs-dogs This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.0202 - Accuracy: 0.9935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.064 | 1.0 | 311 | 0.0483 | 0.9849 | | 0.0622 | 2.0 | 622 | 0.0275 | 0.9903 | | 0.0366 | 3.0 | 933 | 0.0262 | 0.9917 | | 0.0294 | 4.0 | 1244 | 0.0219 | 0.9932 | | 0.0161 | 5.0 | 1555 | 0.0202 | 0.9935 | ### Framework versions - Transformers 4.8.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.1.dev0 - Tokenizers 0.10.3
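The autogenerated card lacks an inference example. A minimal sketch using the image-classification pipeline (assuming the image processor config is available in the repository; the image URL is illustrative):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/vit-base-cats-vs-dogs")

# Any local path or URL to a cat or dog photo works here
print(classifier("http://images.cocodataset.org/val2017/000000039769.jpg"))
```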
Saibo-creator/legal-roberta-base
Saibo-creator
2021-08-31T15:36:35Z
263
6
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "legal", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - en tags: - legal license: apache-2.0 metrics: - precision - recall --- # LEGAL-ROBERTA We introduce LEGAL-ROBERTA, which is a domain-specific language representation model fine-tuned on large-scale legal corpora(4.6 GB). ## Demo 'This \<mask\> Agreement is between General Motors and John Murray .' | Model | top1 | top2 | top3 | top4 | top5 | | ------------ | ---- | --- | --- | --- | -------- | | Bert | new | current | proposed | marketing | joint | | legalBert | settlement | letter | dealer | master | supplemental | | legalRoberta | License | Settlement | Contract | license | Trust | > LegalRoberta captures the case 'The applicant submitted that her husband was subjected to treatment amounting to \<mask\> whilst in the custody of Adana Security Directorate' | Model | top1 | top2 | top3 | top4 | top5 | | ------------ | ---- | --- | --- | --- | -------- | | Bert | torture | rape | abuse | death | violence | | legalBert | torture | detention | arrest | rape | death | | legalRoberta | torture | abuse | insanity | cruelty | confinement | 'Establishing a system for the identification and registration of \<mask\> animals and regarding the labeling of beef and beef products .': | Model | top1 | top2 | top3 | top4 | top5 | | ------------ | ---- | --- | --- | --- | -------- | | Bert | farm | livestock | draft | domestic | wild | | legalBert | live | beef | farm | pet | dairy | | legalRoberta | domestic | all | beef | wild | registered | ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("saibo/legal-roberta-base") model = AutoModel.from_pretrained("saibo/legal-roberta-base") ``` ## Training data The training data consists of 3 origins: 1. Patent Litigations (https://www.kaggle.com/uspto/patent-litigations): This dataset covers over 74k cases across 52 years and over 5 million relevant documents. 5 different files detail the litigating parties, their attorneys, results, locations, and dates. 1. *1.57GB* 2. abbrev:PL 3. *clean 1.1GB* 2. Caselaw Access Project (CAP) (https://case.law/): Following 360 years of United States case law, Caselaw Access Project (CAP) API and bulk data services includes 40 million pages of U.S. court decisions and almost 6.5 million individual cases. 1. *raw 5.6* 2. abbrev:CAP 3. *clean 2.8GB* 3. Google Patents Public Data (https://www.kaggle.com/bigquery/patents): The Google Patents Public Data contains a collection of publicly accessible, connected database tables for empirical analysis of the international patent system. 1. *BigQuery (https://www.kaggle.com/sohier/beyond-queries-exploring-the-bigquery-api)* 2. abbrev:GPPD(1.1GB,patents-public-data.uspto_oce_litigation.documents) 3. *clean 1GB* ## Training procedure We start from a pretrained ROBERTA-BASE model and fine-tune it on the legal corpus. Fine-tuning configuration: - lr = 5e-5(with lr decay, ends at 4.95e-8) - num_epoch = 3 - Total steps = 446500 - Total_flos = 2.7365e18 Loss starts at 1.850 and ends at 0.880 The perplexity after fine-tuning on legal corpus = 2.2735 Device: 2*GeForce GTX TITAN X computeCapability: 5.2 ## Eval results We benchmarked the model on two downstream tasks: Multi-Label Classification for Legal Text and Catchphrase Retrieval with Legal Case Description. 
1. LMTC, Legal Multi-Label Text Classification

Dataset:
- Labels shape: 4271
- Frequent labels: 739
- Few labels: 3369
- Zero labels: 163

Hyperparameters:
- lr: 1e-05
- batch_size: 4
- max_sequence_size: 512
- max_label_size: 15
- few_threshold: 50
- epochs: 10
- dropout: 0.1
- early stop: yes
- patience: 3

## Limitations:

In the Masked Language Model showroom, the tokens have the prefix **Δ **. This seems weird, but I haven't yet been able to fix it.

In the case of a BPE tokenizer (RoBERTa's tokenizer), the symbol Δ  marks the start of a token that follows a space, and the majority of tokens in the vocabs of pre-trained tokenizers start with Δ .

For example

```python
import transformers

tokenizer = transformers.RobertaTokenizer.from_pretrained('roberta-base')
print(tokenizer.tokenize('I love salad'))
```

Outputs:

```
['I', 'Δ love', 'Δ salad']
```

The pretraining of LegalRoBERTa was restricted by the size of the available legal corpora, and the number of pretraining steps is small compared to popular domain-adapted models. This makes LegalRoBERTa significantly **under-trained**.

## BibTeX entry and citation info
milayue/neosh-bot1
milayue
2021-08-31T10:43:59Z
3
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
tags:
- conversational
---

# Neosh Bot1

This is a simplified version. Hopefully a more complex model will be trained in the future.
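The card does not include usage code, so here is a minimal, hedged generation sketch. The turn format the bot was trained on is not documented, so the DialoGPT-style prompt below (user text terminated by the EOS token) is an assumption for illustration only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("milayue/neosh-bot1")
model = AutoModelForCausalLM.from_pretrained("milayue/neosh-bot1")

# Assumed DialoGPT-style turn format: user text followed by the EOS token.
prompt = "Hello, how are you?" + tokenizer.eos_token
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens as the bot's reply.
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```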
redorangeyellowy/tts_korean_tacotron
redorangeyellowy
2021-08-31T03:22:31Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
This is a Korean TTS model (based on Tacotron). The dataset is from Sogang University.
nreimers/MiniLM-L6-H384-uncased
nreimers
2021-08-30T20:05:29Z
1,993
34
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
license: mit
---

## MiniLM: 6 Layer Version

This is a 6-layer version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased/), created by keeping only every second layer.
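Below is a minimal sketch of how such a reduced model can be derived from the full 12-layer checkpoint. The exact indices kept for this published checkpoint are not documented beyond "every second layer", so the selection below is an assumption for illustration.

```python
import torch.nn as nn
from transformers import AutoModel

# Load the full 12-layer MiniLM and keep every second encoder layer.
full = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
keep = [1, 3, 5, 7, 9, 11]  # assumed choice of "every second layer"
full.encoder.layer = nn.ModuleList(full.encoder.layer[i] for i in keep)
full.config.num_hidden_layers = len(keep)

# `full` is now a 6-layer, 384-dimensional encoder analogous to this checkpoint.
```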
nreimers/MiniLM-L3-H384-uncased
nreimers
2021-08-30T20:05:09Z
86
3
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
license: mit
---

## MiniLM: 3 Layer Version

This is a 3-layer version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased/), created by keeping only the layers [3, 7, 11].
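A minimal feature-extraction sketch for this checkpoint; MiniLM uses the standard uncased BERT vocabulary, so the `bert-base-uncased` tokenizer is assumed here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # MiniLM shares the uncased BERT vocab
model = AutoModel.from_pretrained("nreimers/MiniLM-L3-H384-uncased")

inputs = tokenizer("A 3-layer MiniLM encoder.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 384)
print(hidden.shape)
```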
jinmang2/dall-e-tokenizer
jinmang2
2021-08-30T18:20:38Z
4
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# DALL-E-Tokenizer

Hugging Face package for the discrete VAE used for [DALL-E](https://github.com/openai/DALL-E).

# How to use

```python
# from dall_e_tok import DallEEncoder
from dall_e_tok import DALLETokenizer

tokenizer = DALLETokenizer.from_pretrained("jinmang2/dall-e-tokenizer")
```
AdapterHub/bert-base-uncased-pf-ud_en_ewt
AdapterHub
2021-08-30T15:54:13Z
1
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:dp/ud_ewt", "en", "dataset:universal_dependencies", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - bert - adapterhub:dp/ud_ewt - adapter-transformers datasets: - universal_dependencies language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-ud_en_ewt` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_en_ewt", source="hf", set_active=True) ``` ## Architecture & Training This adapter was trained using adapter-transformer's example script for dependency parsing. See https://github.com/Adapter-Hub/adapter-transformers/tree/master/examples/dependency-parsing. ## Evaluation results Scores achieved by dependency parsing adapters on the test set of UD English EWT after training: | Model | UAS | LAS | | --- | --- | --- | | `bert-base-uncased` | 91.74 | 89.15 | | `roberta-base` | 91.43 | 88.43 | ## Citation <!-- Add some description here -->
redorangeyellowy/tts_korean_temp
redorangeyellowy
2021-08-30T10:08:00Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
This is an ESPnet-based Korean TTS model. Note that it is not a finished model yet. The dataset is from our university and is NOT publicly available yet.
vasudevgupta/gsoc-wav2vec2-xlsr-53
vasudevgupta
2021-08-30T07:38:48Z
4
0
transformers
[ "transformers", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
TensorFlow equivalent of [`facebook/wav2vec2-large-xlsr-53`](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
vasudevgupta/gsoc-wav2vec2-robust
vasudevgupta
2021-08-30T07:34:01Z
5
1
transformers
[ "transformers", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
TensorFlow equivalent of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust)
huggingtweets/sarthaktexas
huggingtweets
2021-08-30T07:16:29Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/sarthaktexas/1630307785663/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1425242303925563394/YrMTa0kl_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sarthak Mohanty</div> <div style="text-align: center; font-size: 14px;">@sarthaktexas</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sarthak Mohanty. | Data | Sarthak Mohanty | | --- | --- | | Tweets downloaded | 2431 | | Retweets | 1529 | | Short tweets | 209 | | Tweets kept | 693 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25qevo9e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sarthaktexas's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zm9579aw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zm9579aw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sarthaktexas') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
riyadhctg/distilbert-base-uncased-finetuned-cola
riyadhctg
2021-08-30T07:04:19Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model_index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metric: name: Matthews Correlation type: matthews_correlation value: 0.5526838482765232 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7691 - Matthews Correlation: 0.5527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5247 | 1.0 | 535 | 0.5390 | 0.4315 | | 0.353 | 2.0 | 1070 | 0.5273 | 0.4994 | | 0.2386 | 3.0 | 1605 | 0.6391 | 0.5089 | | 0.17 | 4.0 | 2140 | 0.7691 | 0.5527 | | 0.1348 | 5.0 | 2675 | 0.8483 | 0.5472 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
uhhlt/bert-based-uncased-hatespeech-movies
uhhlt
2021-08-29T21:42:02Z
6
3
transformers
[ "transformers", "tf", "bert", "text-classification", "en", "arxiv:2108.10724", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
tag: text-classification
datasets:
- twitter
- movies subtitles
---

# bert-based-uncased-hatespeech-movies:

A hate speech model used to classify text as **normal**, **offensive**, or **hatespeech** in movie subtitles.

The model starts from a pre-trained transformer model (bert-base-uncased) that is further trained on Twitter comments labeled as normal, offensive, or hate, so that it learns the context of social media data. It is then fine-tuned on the movie subtitles dataset.

Please check our paper, and if you use the model, please cite:

```
@article{von2021hateful,
  title={How Hateful are Movies? A Study and Prediction on Movie Subtitles},
  author={von Boguszewski, Niklas and Moin, Sana and Bhowmick, Anirban and Yimam, Seid Muhie and Biemann, Chris},
  journal={arXiv preprint arXiv:2108.10724},
  year={2021}
}
```

The dataset and models are available at https://github.com/uhh-lt/hatespeech
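For reference, a minimal classification sketch. The repository ships TensorFlow weights, hence `framework="tf"`; the mapping of output label ids to normal / offensive / hatespeech is not documented in this card, so inspect the model config before relying on specific label names.

```python
from transformers import pipeline

# Loads the TensorFlow checkpoint behind a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="uhhlt/bert-based-uncased-hatespeech-movies",
    framework="tf",
)
print(classifier("That scene was absolutely disgusting and hateful."))
```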
j-hartmann/emotion-english-roberta-large
j-hartmann
2021-08-29T11:48:09Z
1,644
14
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "sentiment", "emotion", "twitter", "reddit", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: "en"
tags:
- roberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: "Oh wow. I didn't know that."
- text: "This movie always makes me cry.."
- text: "Oh Happy Day"
---

## Description β„Ή

With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets and predicts Ekman's 6 basic emotions, plus a neutral class:

1) anger 🀬
2) disgust 🀒
3) fear 😨
4) joy πŸ˜€
5) neutral 😐
6) sadness 😭
7) surprise 😲

The model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large). For further details on this emotion model, please refer to the model card of its [DistilRoBERTa](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) version.
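A minimal usage sketch, analogous to the example documented for the DistilRoBERTa version, returning a score for each of the 7 classes:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-roberta-large",
    return_all_scores=True,  # one score per emotion class
)
print(classifier("Oh wow. I didn't know that."))
```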
jean-paul/kinyaRoberta-large
jean-paul
2021-08-29T10:25:44Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Model description

A pretrained model on the Kinyarwanda language dataset using a masked language modeling (MLM) objective. The RoBERTa model was first introduced in [this paper](https://arxiv.org/abs/1907.11692). This KinyaRoBERTa model was pretrained on uncased tokens, which means that there is no difference between, for example, ikinyarwanda and Ikinyarwanda.

# Training parameters

#### Dataset

The dataset comes from three sources: news articles in Rwanda extracted from different news web pages, dumped Wikipedia files, and books in Kinyarwanda. The sizes of the data sources are 72 thousand news articles, three thousand dumped Wikipedia articles, and six books with more than a thousand pages.

#### Hyperparameters

The model was trained with the default configuration of RoBERTa and the Trainer from Hugging Face. However, due to computational resource constraints, we kept the number of transformer layers at 12.

# How to use:

1) The model can be used directly with the pipeline for masked language modeling as follows:

```
from transformers import pipeline
the_mask_pipe = pipeline(
    "fill-mask",
    model='jean-paul/kinyaRoberta-large',
    tokenizer='jean-paul/kinyaRoberta-large',
)
the_mask_pipe("Ejo ndikwiga nagize <mask> baje kunsura.")

[{'sequence': 'Ejo ndikwiga nagize amahirwe baje kunsura.', 'score': 0.5675836205482483, 'token': 1711, 'token_str': ' amahirwe'},
 {'sequence': 'Ejo ndikwiga nagize benshi baje kunsura.', 'score': 0.03573048859834671, 'token': 769, 'token_str': ' benshi'},
 {'sequence': 'Ejo ndikwiga nagize ubwoba baje kunsura.', 'score': 0.03272199630737305, 'token': 2594, 'token_str': ' ubwoba'},
 {'sequence': 'Ejo ndikwiga nagize ngo baje kunsura.', 'score': 0.013406379148364067, 'token': 396, 'token_str': ' ngo'},
 {'sequence': 'Ejo ndikwiga nagize abantu baje kunsura.', 'score': 0.012342716567218304, 'token': 500, 'token_str': ' abantu'}]
```

2) Direct use from the transformers library to get features using AutoModel:

```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("jean-paul/kinyaRoberta-large")
model = AutoModelForMaskedLM.from_pretrained("jean-paul/kinyaRoberta-large")

input_text = "Ejo ndikwiga nagize abashyitsi baje kunsura."
encoded_input = tokenizer(input_text, return_tensors='pt')
output = model(**encoded_input)
```

__Note__: We used the Hugging Face implementations for pretraining RoBERTa from scratch, both the RoBERTa model and the classes needed to do it.
huggingtweets/mullbot_forever
huggingtweets
2021-08-29T05:36:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/mullbot_forever/1630215387933/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1334794074822504449/KX8oD2AU_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">extremely online bot</div> <div style="text-align: center; font-size: 14px;">@mullbot_forever</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from extremely online bot. | Data | extremely online bot | | --- | --- | | Tweets downloaded | 1432 | | Retweets | 0 | | Short tweets | 22 | | Tweets kept | 1410 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/301sf9tj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mullbot_forever's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2u7gvuie) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2u7gvuie/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mullbot_forever') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lowlevelware/512x512_diffusion_unconditional_ImageNet
lowlevelware
2021-08-29T05:20:21Z
0
14
null
[ "arxiv:2105.05233", "region:us" ]
null
2022-03-02T23:29:05Z
# 512x512 diffusion (unconditional ImageNet) Modality: Images Intended Use: Generation of images with or without classifier guidance ## Detailed description A 512x512 unconditional ImageNet diffusion model, fine-tuned for 8100 steps from the OpenAI trained 512x512 class-conditional ImageNet diffusion model. It was fine-tuned into an unconditional model in order to enable better guidance by CLIP (or any other non-ImageNet classifier). ### Short description A 512x512 unconditional ImageNet diffusion model, fine-tuned from the OpenAI trained 512x512 class-conditional ImageNet diffusion model. ## License MIT Training Data: ImageNet (ILSVRC 2012 subset) Metrics / Evaluations: None Limitations and Biases: - These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces. This may stem from ImageNet's emphasis on non-human objects. While classifier guidance can improve sample quality, it reduces diversity, resulting in some modes of the data distribution being underrepresented. This can potentially amplify existing biases in the training dataset such as gender and racial biases. Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information. Links: https://arxiv.org/abs/2105.05233 (Diffusion Models Beat GANs on Image Synthesis), https://github.com/openai/guided-diffusion