Dataset columns:

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-01 00:47:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (530 distinct values) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 distinct values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-01 00:46:57 |
| card | string (length) | 11 | 1.01M |
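Once the dump is loaded into a dataframe, the records below can be queried directly against these columns. A minimal sketch, assuming the data is available as a local Parquet file (the filename `models.parquet` is a placeholder):

```python
import pandas as pd

# Placeholder filename; point this at wherever the dump is actually stored.
df = pd.read_parquet("models.parquet")

# Example query against the columns summarized above:
# huggingtweets models, most-downloaded first.
ht = df[df["author"] == "huggingtweets"].sort_values("downloads", ascending=False)
print(ht[["modelId", "downloads", "likes", "pipeline_tag"]].head())
```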
huggingtweets/vfahegao
huggingtweets
2021-07-23T17:11:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/vfahegao/1627060115826/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1398475970210828290/2kCvFheG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Eye of the Devil Starring Sharon Tate 🍿🍿🍿🍿🍿🕊</div> <div style="text-align: center; font-size: 14px;">@vfahegao</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Eye of the Devil Starring Sharon Tate 🍿🍿🍿🍿🍿🕊. | Data | Eye of the Devil Starring Sharon Tate 🍿🍿🍿🍿🍿🕊 | | --- | --- | | Tweets downloaded | 421 | | Retweets | 68 | | Short tweets | 104 | | Tweets kept | 249 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lu2ve0b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vfahegao's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tnz0xod) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tnz0xod/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vfahegao') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/laineden
huggingtweets
2021-07-23T17:10:14Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/laineden/1627060210524/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1415021724119277572/iSuhNx8q_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">snail ❥</div> <div style="text-align: center; font-size: 14px;">@laineden</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from snail ❥. | Data | snail ❥ | | --- | --- | | Tweets downloaded | 3235 | | Retweets | 516 | | Short tweets | 487 | | Tweets kept | 2232 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jg8608cr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laineden's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2d466up4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2d466up4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/laineden') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/moviefishy
huggingtweets
2021-07-23T16:51:17Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/moviefishy/1627059072751/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1408154042698665985/1PWi4RhY_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Rock Genius Fishy - Rock House Head</div> <div style="text-align: center; font-size: 14px;">@moviefishy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Rock Genius Fishy - Rock House Head. | Data | Rock Genius Fishy - Rock House Head | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 485 | | Short tweets | 546 | | Tweets kept | 2207 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/f99exm0b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @moviefishy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v5cszr1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v5cszr1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/moviefishy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/islamphobiacow-praisegodbarbon
huggingtweets
2021-07-23T16:06:26Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/islamphobiacow-praisegodbarbon/1627056382131/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1381764452098437120/74IgKP07_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1368077075127603200/Z08slO2P_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Boston Psychology PhD & keyvan</div> <div style="text-align: center; font-size: 14px;">@islamphobiacow-praisegodbarbon</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Boston Psychology PhD & keyvan. | Data | Boston Psychology PhD | keyvan | | --- | --- | --- | | Tweets downloaded | 3224 | 3242 | | Retweets | 858 | 179 | | Short tweets | 251 | 223 | | Tweets kept | 2115 | 2840 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3egvdux4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamphobiacow-praisegodbarbon's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34hmjrwi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34hmjrwi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/islamphobiacow-praisegodbarbon') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flax-sentence-embeddings/all_datasets_v3_roberta-large
flax-sentence-embeddings
2021-07-23T15:45:17Z
5,030
13
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "en", "arxiv:2104.08727", "arxiv:1810.09305", "arxiv:2102.07033", "arxiv:1904.06472", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text with the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_roberta-large') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large). Please refer to its model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective: we compute the cosine similarity between every possible sentence pair in the batch and apply a cross-entropy loss against the true pairings (a short sketch of this loss follows the dataset table below). ### Hyperparameters We trained our model on a TPU v3-8 for 540k steps with a batch size of 1024 (128 per TPU core) and a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository. ### Training data We fine-tune our model on the concatenation of multiple datasets, totalling more than 1 billion sentence pairs. Each dataset was sampled with a weighted probability, configured in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
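The contrastive objective described in the fine-tuning section above can be sketched as follows. This is only an illustration of the in-batch loss, written in PyTorch for brevity rather than the project's actual JAX/Flax training code, and the similarity scale factor is an assumed value:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """Cross-entropy over cosine similarities of all in-batch pairs.

    anchor_emb, positive_emb: [batch, dim] embeddings of the two sides of each pair.
    scale: temperature-like multiplier on the similarities (assumed value).
    """
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    sim = a @ p.T * scale                                  # [batch, batch] cosine similarities
    labels = torch.arange(sim.size(0), device=sim.device)  # the true partner sits on the diagonal
    return F.cross_entropy(sim, labels)
```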
flax-sentence-embeddings/all_datasets_v3_distilroberta-base
flax-sentence-embeddings
2021-07-23T15:43:19Z
13
2
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "en", "arxiv:2104.08727", "arxiv:1810.09305", "arxiv:2102.07033", "arxiv:1904.06472", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`distilroberta-base`](https://huggingface.co/distilroberta-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text with the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_distilroberta-base') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` (A short sentence-similarity example follows the dataset table below.) # Training procedure ## Pre-training We use the pretrained [`distilroberta-base`](https://huggingface.co/distilroberta-base). Please refer to its model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective: we compute the cosine similarity between every possible sentence pair in the batch and apply a cross-entropy loss against the true pairings. ### Hyperparameters We trained our model on a TPU v3-8 for 540k steps with a batch size of 1024 (128 per TPU core) and a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository. ### Training data We fine-tune our model on the concatenation of multiple datasets, totalling more than 1 billion sentence pairs. Each dataset was sampled with a weighted probability, configured in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
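As a follow-up to the usage snippet above, here is a small sentence-similarity example built on the same encoder; the sentences are arbitrary placeholders and the similarity is computed with plain NumPy rather than any project-specific utility:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_distilroberta-base')
sentences = [
    "A man is playing a guitar.",     # placeholder examples
    "Someone is performing music.",
    "The weather is cold today.",
]
emb = model.encode(sentences)

# Cosine similarity of the first sentence against the other two.
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
print(emb[0] @ emb[1:].T)  # higher value = more similar
```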
huggingtweets/gozusabu
huggingtweets
2021-07-23T15:36:01Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/gozusabu/1627054557412/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1382600435056394242/azQoqzIb_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Calum Macleod</div> <div style="text-align: center; font-size: 14px;">@gozusabu</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Calum Macleod. | Data | Calum Macleod | | --- | --- | | Tweets downloaded | 1926 | | Retweets | 673 | | Short tweets | 279 | | Tweets kept | 974 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y71yp06o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gozusabu's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dwp3t07q) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dwp3t07q/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gozusabu') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/timthom_007
huggingtweets
2021-07-23T15:30:29Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/timthom_007/1627054225472/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1406641405150253059/RNJ6uGeN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">TimThom 🍝</div> <div style="text-align: center; font-size: 14px;">@timthom_007</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from TimThom 🍝. | Data | TimThom 🍝 | | --- | --- | | Tweets downloaded | 1187 | | Retweets | 89 | | Short tweets | 225 | | Tweets kept | 873 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37fjihoh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timthom_007's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tq742cw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tq742cw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/timthom_007') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gwynethfae/t5-small-finetuned-xsum
gwynethfae
2021-07-23T15:08:15Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model_index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 13 | 3.6429 | 15.3135 | 1.0725 | 12.0447 | 12.445 | 18.97 | ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
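For readers who want to reproduce a similar run, the hyperparameters listed in the card above map roughly onto `Seq2SeqTrainingArguments` as sketched below; this is not the author's original training script, and `output_dir` and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Rough mapping of the hyperparameters listed above (the Adam betas/epsilon are the defaults).
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-xsum",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    predict_with_generate=True,  # needed to compute ROUGE during evaluation
)
```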
danlou/albert-xxlarge-v2-finetuned-csqa
danlou
2021-07-23T13:55:03Z
6
2
transformers
[ "transformers", "pytorch", "albert", "multiple-choice", "generated_from_trainer", "dataset:commonsense_qa", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - commonsense_qa metrics: - accuracy model_index: - name: albert-xxlarge-v2-finetuned-csqa results: - dataset: name: commonsense_qa type: commonsense_qa args: default metric: name: Accuracy type: accuracy value: 0.7870597839355469 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxlarge-v2-finetuned-csqa This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the commonsense_qa dataset. It achieves the following results on the evaluation set: - Loss: 1.6177 - Accuracy: 0.7871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7464 | 1.0 | 609 | 0.5319 | 0.7985 | | 0.3116 | 2.0 | 1218 | 0.6422 | 0.7936 | | 0.0769 | 3.0 | 1827 | 1.2674 | 0.7952 | | 0.0163 | 4.0 | 2436 | 1.4839 | 0.7903 | | 0.0122 | 5.0 | 3045 | 1.6177 | 0.7871 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0 - Datasets 1.10.2 - Tokenizers 0.10.3
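The card above does not include a usage snippet; here is a minimal sketch, assuming the checkpoint loads with `AutoModelForMultipleChoice` (the example question and answer options are hypothetical):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "danlou/albert-xxlarge-v2-finetuned-csqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Where would you put a pillow at night?"             # hypothetical
choices = ["bed", "refrigerator", "garage", "office", "river"]  # hypothetical

# Encode one (question, choice) pair per candidate answer; the model expects
# inputs shaped [batch, num_choices, seq_len].
enc = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
batch = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**batch).logits
print(choices[logits.argmax(dim=-1).item()])
```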
danlou/albert-xxlarge-v2-finetuned-csqa-ih
danlou
2021-07-23T13:32:06Z
4
1
transformers
[ "transformers", "pytorch", "albert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model_index: name: albert-xxlarge-v2-finetuned-csqa-ih --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxlarge-v2-finetuned-csqa-ih This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5694 - Accuracy: 0.8026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8032 | 1.0 | 532 | 0.5217 | 0.8043 | | 0.3182 | 2.0 | 1064 | 0.6313 | 0.7985 | | 0.0668 | 3.0 | 1596 | 1.2971 | 0.7969 | | 0.0131 | 4.0 | 2128 | 1.4671 | 0.8026 | | 0.0046 | 5.0 | 2660 | 1.5694 | 0.8026 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0 - Datasets 1.10.2 - Tokenizers 0.10.3
kaushalya/medclip
kaushalya
2021-07-23T09:40:37Z
82
14
transformers
[ "transformers", "jax", "tensorboard", "hybrid-clip", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# MedCLIP: Fine-tuning a CLIP model on the ROCO medical dataset <!-- ![Logo](./assets/logo.png) --> <h3 align="center"> <!-- <p>MedCLIP</p> --> <img src="./assets/logo.png" alt="huggingface-medclip" width="250" height="250"> ## Summary This repository contains the code for fine-tuning a CLIP model on the [ROCO dataset](https://github.com/razorx89/roco-dataset), a dataset made of radiology images and their captions. This work was done as part of the [**Flax/Jax community week**](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md#quickstart-flax-and-jax-in-transformers) organized by Hugging Face and Google. ### Demo You can try a Streamlit demo app that uses this model on [🤗 Spaces](https://huggingface.co/spaces/kaushalya/medclip-roco). You may have to sign up for the 🤗 Spaces private beta to access this app (screenshot shown below). ![Streamlit app](./assets/streamlit_app.png) 🤗 Hub Model card: https://huggingface.co/flax-community/medclip-roco ## Dataset 🧩 Each image is accompanied by a textual caption. The caption length varies from a few characters (a single word) to 2,000 characters (multiple sentences). During preprocessing we remove all images that have a caption shorter than 10 characters. Training set: 57,780 images with their captions. Validation set: 7,200. Test set: 7,650. [ ] Give an example ## Installation 💽 This repo depends on the master branch of the [Hugging Face Transformers library](https://github.com/huggingface/transformers). First clone the transformers repository, then install it locally (preferably inside a virtual environment) with `pip install -e ".[flax]"`. ## The Model ⚙️ You can load the pretrained model from the Hugging Face Hub with ```python from medclip.modeling_hybrid_clip import FlaxHybridCLIP model = FlaxHybridCLIP.from_pretrained("flax-community/medclip-roco") ``` ## Training The model is trained using Flax/JAX on a Cloud TPU v3-8. You can fine-tune a CLIP model implemented in Flax by simply running `sh run_medclip`. This is the validation loss curve we observed when we trained the model using the `run_medclip.sh` script. ![Validation loss](./assets/val_loss.png) ## Limitations 🚨 The current model is capable of identifying whether a given radiology image is a PET scan or an ultrasound scan. However, it fails at distinguishing a brain scan from a lung scan. ❗️This model **should not** be used in a medical setting without further evaluation❗️. ## Acknowledgements Huge thanks to the Hugging Face 🤗 team and the Google JAX/Flax team for organizing the community week and letting us use cloud compute for 2 weeks. We especially thank [@patil-suraj](https://github.com/patil-suraj) & [@patrickvonplaten](https://github.com/patrickvonplaten) for the continued support on Slack and the detailed feedback. ## TODO [ ] Evaluation on downstream tasks [ ] Zero-shot learning performance [ ] Merge the demo app
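To make the preprocessing rule above concrete, the caption-length filter looks roughly like the sketch below; the field name is hypothetical, and the repository's own ROCO loading code defines the real one:

```python
MIN_CAPTION_LENGTH = 10  # threshold stated in the Dataset section above

def keep_example(example: dict) -> bool:
    """Drop image-caption pairs whose caption is shorter than the threshold."""
    return len(example["caption"].strip()) >= MIN_CAPTION_LENGTH

# e.g. filtered = [ex for ex in roco_examples if keep_example(ex)]
```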
vasudevgupta/gsoc-wav2vec2
vasudevgupta
2021-07-23T08:25:58Z
6
0
transformers
[ "transformers", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
Wav2Vec2 Model (initialized from [`facebook/wav2vec2-base`](https://huggingface.co/facebook/wav2vec2-base)) with **no** LM head. Model weights are converted into TensorFlow using the following script: ```shell python3 convert_torch_to_tf.py --hf_model_id "facebook/wav2vec2-base" ``` The **TF SavedModel** is obtained by running the following commands: ```shell git clone https://huggingface.co/vasudevgupta/gsoc-wav2vec2 python3 export2hub.py \ --hf_model_id facebook/wav2vec2-base \ --saved_model_dir gsoc-wav2vec2/saved-model \ --seqlen 246000 cd gsoc-wav2vec2 && tar -czf saved-model.tar.gz saved-model ``` Project Link: https://github.com/vasudevgupta7/gsoc-wav2vec2
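A minimal sketch of running the converted weights from Python; loading the checkpoint directly with `TFWav2Vec2Model` is an assumption (the documented path is the SavedModel export above), and the waveform here is random noise rather than real audio:

```python
import tensorflow as tf
from transformers import TFWav2Vec2Model

model = TFWav2Vec2Model.from_pretrained("vasudevgupta/gsoc-wav2vec2")  # assumed to load directly
waveform = tf.random.normal((1, 246000))  # same seqlen as in the export command above
hidden_states = model(waveform).last_hidden_state  # no LM head, so these are encoder features
print(hidden_states.shape)
```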
ehdwns1516/gpt3-kor-based_gpt2_review_SR5
ehdwns1516
2021-07-23T01:19:22Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# ehdwns1516/gpt3-kor-based_gpt2_review_SR5 * This model has been trained on Korean reviews with a 5-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). * Input the text from which you want to generate a review. * If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well. review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/) review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator) ## Model links for each star rating (1 to 5) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5) ## Overview Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2) Language: Korean Training data: the review_body field of reviews with a 5-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note) ## Usage ## In Transformers ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR5") model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR5") generator = pipeline( "text-generation", model="ehdwns1516/gpt3-kor-based_gpt2_review_SR5", tokenizer=tokenizer ) context = "your context" result = dict() result[0] = generator(context)[0] ```
ehdwns1516/gpt3-kor-based_gpt2_review_SR4
ehdwns1516
2021-07-23T01:18:45Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# ehdwns1516/gpt3-kor-based_gpt2_review_SR4 * This model has been trained on Korean reviews with a 4-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). * Input the text from which you want to generate a review. * If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well. review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/) review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator) ## Model links for each star rating (1 to 5) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5) ## Overview Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2) Language: Korean Training data: the review_body field of reviews with a 4-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note) ## Usage ## In Transformers ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR4") model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR4") generator = pipeline( "text-generation", model="ehdwns1516/gpt3-kor-based_gpt2_review_SR4", tokenizer=tokenizer ) context = "your context" result = dict() result[0] = generator(context)[0] ```
ehdwns1516/gpt3-kor-based_gpt2_review_SR3
ehdwns1516
2021-07-23T01:18:13Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# ehdwns1516/gpt3-kor-based_gpt2_review_SR3 * This model has been trained on Korean reviews with a 3-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). * Input the text from which you want to generate a review. * If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well. review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/) review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator) ## Model links for each star rating (1 to 5) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5) ## Overview Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2) Language: Korean Training data: the review_body field of reviews with a 3-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note) ## Usage ## In Transformers ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR3") model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR3") generator = pipeline( "text-generation", model="ehdwns1516/gpt3-kor-based_gpt2_review_SR3", tokenizer=tokenizer ) context = "your context" result = dict() result[0] = generator(context)[0] ```
ehdwns1516/gpt3-kor-based_gpt2_review_SR2
ehdwns1516
2021-07-23T01:16:21Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# ehdwns1516/gpt3-kor-based_gpt2_review_SR2 * This model has been trained on Korean reviews with a 2-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). * Input the text from which you want to generate a review. * If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well. review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/) review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator) ## Model links for each star rating (1 to 5) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4) * [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5) ## Overview Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2) Language: Korean Training data: the review_body field of reviews with a 2-star rating from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment). Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note) ## Usage ## In Transformers ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR2") model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR2") generator = pipeline( "text-generation", model="ehdwns1516/gpt3-kor-based_gpt2_review_SR2", tokenizer=tokenizer ) context = "your context" result = dict() result[0] = generator(context)[0] ```
ehdwns1516/gpt2_review_star3
ehdwns1516
2021-07-23T01:06:54Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# gpt2_review_star3 * This model has been trained on the review_body field of reviews with a 3-star rating from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi). * Input the text from which you want to generate a review. * If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well. review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/) review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator) ## Model links for each star rating (1 to 5) * [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1) * [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2) * [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3) * [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4) * [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5) ## Overview Language model: [gpt2](https://huggingface.co/gpt2) Language: English Training data: the review_body field of reviews with a 3-star rating from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi). Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note) ## Usage ## In Transformers ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star3") model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star3") generator = pipeline( "text-generation", model="ehdwns1516/gpt2_review_star3", tokenizer=tokenizer ) context = "your context" result = dict() result[0] = generator(context)[0] ```
ehdwns1516/gpt2_review_star2
ehdwns1516
2021-07-23T01:06:41Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# gpt2_review_star2

* This model was trained on the review_body field of 2-star reviews in the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text from which you want a review to be generated.
* If the context is longer than 1200 characters, it may be cut off in the middle and the result may not come out well.

review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)

review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)

## Model links for each 1 to 5 star

* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)

## Overview

Language model: [gpt2](https://huggingface.co/gpt2)

Language: English

Training data: review_body field of the 2-star reviews in the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).

Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)

## Usage

## In Transformers

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star2")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star2")

# The pipeline also accepts the loaded model object instead of the model id string.
generator = pipeline(
    "text-generation",
    model="ehdwns1516/gpt2_review_star2",
    tokenizer=tokenizer,
)

context = "your context"

result = dict()
result[0] = generator(context)[0]
```
Fraser/wiki-vae
Fraser
2021-07-22T19:16:20Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
# Wiki-VAE

A Transformer-VAE trained on all the sentences in Wikipedia.

Training is done on AWS SageMaker.
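The card gives no architecture or usage details, so here is a purely illustrative sketch of the Transformer-VAE idea it refers to: a Transformer encodes a sentence, the pooled representation is mapped to a Gaussian latent, and a sample from that latent conditions the decoder that reconstructs the tokens. Every module name and size below is a hypothetical stand-in, not the project's actual code.

```python
# Toy Transformer-VAE sketch (hypothetical; see the Wiki-VAE project for the real implementation).
import torch
import torch.nn as nn

class TinyTransformerVAE(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, latent_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerEncoder(  # simplified: re-uses encoder-style blocks
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.from_latent = nn.Linear(latent_dim, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids)).mean(dim=1)      # pool the sentence into one vector
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        dec_in = self.embed(input_ids) + self.from_latent(z).unsqueeze(1)  # condition decoding on z
        return self.lm_head(self.decoder(dec_in)), mu, logvar

logits, mu, logvar = TinyTransformerVAE()(torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 1000])
```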
shahrukhx01/distilbart-cnn-12-6-text2sql
shahrukhx01
2021-07-22T08:38:17Z
9
2
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
The distilbart-cnn-12-6-text2sql model is fine-tuned on the WikiSQL dataset and translates a natural-language question into an SQL query.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained('shahrukhx01/distilbart-cnn-12-6-text2sql')
tokenizer = BartTokenizer.from_pretrained('shahrukhx01/distilbart-cnn-12-6-text2sql')

TEXT_QUERY = "what is the temperature of berlin"

inputs = tokenizer([TEXT_QUERY], max_length=1024, return_tensors='pt')

# Generate SQL (max_length raised from 5 so the generated query is not truncated)
sql_query_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=64, early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in sql_query_ids])
```
suhnylla/planes_airlines
suhnylla
2021-07-22T02:21:24Z
69
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: planes_airlines results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.32307693362236023 --- # planes_airlines Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### planes cathay pacific ![planes cathay pacific](images/planes_cathay_pacific.jpg) #### planes delta airlines ![planes delta airlines](images/planes_delta_airlines.jpg) #### planes malaysia airlines ![planes malaysia airlines](images/planes_malaysia_airlines.jpg) #### planes singapore airlines ![planes singapore airlines](images/planes_singapore_airlines.jpg) #### planes virgin airlines ![planes virgin airlines](images/planes_virgin_airlines.jpg)
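The card shows the target classes but no loading code. Since HuggingPics exports a standard ViT image classifier, the generic `transformers` image-classification pipeline should be able to load it; the snippet below is a usage sketch (the image path is a placeholder), not part of the original card.

```python
from transformers import pipeline

# Usage sketch for this HuggingPics ViT classifier; "plane_photo.jpg" is a placeholder path.
classifier = pipeline("image-classification", model="suhnylla/planes_airlines")
print(classifier("plane_photo.jpg"))
# e.g. [{'label': 'planes delta airlines', 'score': 0.41}, ...] (illustrative output)
```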
aristotletan/bart-large-finetuned-xsum
aristotletan
2021-07-22T01:45:40Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:wsj_markets", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - wsj_markets metrics: - rouge model_index: - name: bart-large-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wsj_markets type: wsj_markets args: default metric: name: Rouge1 type: rouge value: 15.3934 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-finetuned-xsum This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the wsj_markets dataset. It achieves the following results on the evaluation set: - Loss: 0.8497 - Rouge1: 15.3934 - Rouge2: 7.0378 - Rougel: 13.9522 - Rougelsum: 14.3541 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.0964 | 1.0 | 1735 | 0.9365 | 18.703 | 12.7539 | 18.1293 | 18.5397 | 20.0 | | 0.95 | 2.0 | 3470 | 0.8871 | 19.5223 | 13.0938 | 18.9148 | 18.8363 | 20.0 | | 0.8687 | 3.0 | 5205 | 0.8587 | 15.0915 | 7.142 | 13.6693 | 14.5975 | 20.0 | | 0.7989 | 4.0 | 6940 | 0.8569 | 18.243 | 11.4495 | 17.4326 | 17.489 | 20.0 | | 0.7493 | 5.0 | 8675 | 0.8497 | 15.3934 | 7.0378 | 13.9522 | 14.3541 | 20.0 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.10.0 - Tokenizers 0.10.3
huggingtweets/devops_guru-neiltyson-nigelthurlow
huggingtweets
2021-07-21T22:55:43Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/devops_guru-neiltyson-nigelthurlow/1626908139492/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1163117736140124160/u23u5DU4_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/748969887146471424/4BmVTQAv_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/74188698/NeilTysonOriginsA-Crop_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nigel Thurlow & Ernest Wright, Ph. D. ABD & Neil deGrasse Tyson</div> <div style="text-align: center; font-size: 14px;">@devops_guru-neiltyson-nigelthurlow</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nigel Thurlow & Ernest Wright, Ph. D. ABD & Neil deGrasse Tyson. | Data | Nigel Thurlow | Ernest Wright, Ph. D. ABD | Neil deGrasse Tyson | | --- | --- | --- | --- | | Tweets downloaded | 1264 | 1933 | 3250 | | Retweets | 648 | 20 | 10 | | Short tweets | 27 | 105 | 79 | | Tweets kept | 589 | 1808 | 3161 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jc9vah1k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devops_guru-neiltyson-nigelthurlow's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2myicem9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2myicem9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/devops_guru-neiltyson-nigelthurlow') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bgfruna/double-bart-ensemble-squad2
bgfruna
2021-07-21T22:47:12Z
0
0
null
[ "pytorch", "question-answering", "en", "dataset:squad_v2", "dataset:squad2", "license:cc-by-4.0", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en tags: - pytorch - question-answering datasets: - squad_v2 - squad2 license: cc-by-4.0 metrics: - squad_v2 - exact - f1 widget: - text: "By what main attribute are computational problems classified utilizing computational complexity theory?" context: "Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm." --- # Performance This ensemble was evaluated on [SQuAD 2.0](https://huggingface.co/datasets/squad_v2) with the following results: ``` {'HasAns_exact': 52.5472334682861, 'HasAns_f1': 67.94939813758602, 'HasAns_total': 5928, 'NoAns_exact': 91.75777964676199, 'NoAns_f1': 91.75777964676199, 'NoAns_total': 5945, 'best_exact': 72.16373283921503, 'best_exact_thresh': 0.0, 'best_f1': 79.85378860941708, 'best_f1_thresh': 0.0, 'exact': 72.1805777815211, 'f1': 79.87063355172326, 'total': 11873 } ```
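For context on how numbers like the block above are usually produced: the SQuAD 2.0 metric in the `evaluate` library reports the same `exact`/`f1` values plus the `HasAns_*`/`NoAns_*` breakdown. The snippet below is a generic illustration with made-up prediction data, not this ensemble's evaluation script.

```python
# Generic illustration of computing SQuAD 2.0 style metrics; the example id and answer are made up.
import evaluate

squad_v2 = evaluate.load("squad_v2")
predictions = [{"id": "q1", "prediction_text": "inherent difficulty", "no_answer_probability": 0.0}]
references = [{"id": "q1", "answers": {"text": ["inherent difficulty"], "answer_start": [150]}}]
print(squad_v2.compute(predictions=predictions, references=references))
# -> {'exact': ..., 'f1': ..., 'HasAns_exact': ..., 'NoAns_exact': ..., ...}
```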
huggingtweets/nigelthurlow
huggingtweets
2021-07-21T22:34:57Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/nigelthurlow/1626906893945/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1163117736140124160/u23u5DU4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nigel Thurlow</div> <div style="text-align: center; font-size: 14px;">@nigelthurlow</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nigel Thurlow. | Data | Nigel Thurlow | | --- | --- | | Tweets downloaded | 1264 | | Retweets | 648 | | Short tweets | 27 | | Tweets kept | 589 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/n4jwj2tf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nigelthurlow's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2r5nb7zp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2r5nb7zp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nigelthurlow') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/alicefromqueens
huggingtweets
2021-07-21T21:38:57Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/alicefromqueens/1626903533456/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1372804858068230149/aSZcjxvN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Dread Alice</div> <div style="text-align: center; font-size: 14px;">@alicefromqueens</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Dread Alice. | Data | Dread Alice | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 50 | | Short tweets | 511 | | Tweets kept | 2688 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/frqs20kj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alicefromqueens's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2c7152gp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2c7152gp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/alicefromqueens') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
b25mayank3/shirt_identifier
b25mayank3
2021-07-21T20:29:09Z
73
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: shirt_identifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.6875 --- # shirt_identifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Big Check shirt ![Big Check shirt](images/Big_Check_shirt.jpg) #### Formal Shirt ![Formal Shirt](images/Formal_Shirt.jpg) #### casual shirt ![casual shirt](images/casual_shirt.jpg) #### denim shirt ![denim shirt](images/denim_shirt.jpg)
flax-community/clip-vit-base-patch32_marian-es
flax-community
2021-07-21T19:30:04Z
1
0
transformers
[ "transformers", "jax", "tensorboard", "clip-vision-marian", "arxiv:2102.08981", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# CLIP-Vision-Marian Seq2Seq Encoder-Decoder Model

CLIP-Vision-Marian, pre-trained on a subset of Spanish-translated Conceptual-12M image-text pairs using a seq2seq training objective. 2.5M cleaned English image-text pairs were translated using the English-to-Spanish Marian model. We trained the CLIP-Vision-Marian model during the community week hosted by Huggingface 🤗 using JAX/Flax.

## Model description

CLIP-Vision-Marian is a modified transformers model which takes in visual embeddings from the CLIP-Vision transformer and feeds them into the `encoder_hidden_states` of a Marian decoder. This is done for deep cross-modal interaction via `cross-attention` between the two modes. The decoder then predicts logits for the `input_ids` provided and can be used for generation.

## Intended uses & limitations❗️

You can use the raw model for an encoder-decoder network where you want the encoder to encode images and the decoder to decode text.

Note that this model is primarily aimed at being fine-tuned on tasks like Spanish image captioning.

### How to use❓

You will need to clone the model from [here](https://github.com/bhavitvyamalik/spanish-image-captioning). An example of usage is shown below:

```python
>>> from torchvision.io import read_image
>>> import numpy as np
>>> import wget
>>> import os
>>> from transformers import CLIPProcessor, MarianTokenizer
>>> from models.flax_clip_vision_marian.modeling_clip_vision_marian import FlaxCLIPVisionMarianMT
>>> img = wget.download("https://huggingface.co/streamlitiframe/flax-community/spanish-image-captioning/+/media/55a8898e61131569cc0ed4e72a8b3092969d63c2dff4f47ed9ef0d89.jpeg")
>>> img = read_image(img)  # reading image
>>> clip_processor = CLIPProcessor.from_pretrained('flax-community/clip-vit-base-patch32_marian')
>>> clip_outputs = clip_processor(images=img)
>>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0)  # Need to transpose images as the model expects channel-last images.
>>> tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es')
>>> model = FlaxCLIPVisionMarianMT.from_pretrained('flax-community/clip-vit-base-patch32_marian-es')
>>> pixel_values = np.concatenate([clip_outputs['pixel_values']])  # batch of pixel values for generation
>>> output_ids = model.generate(pixel_values, early_stopping=True, num_beams=4, max_length=64).sequences
>>> output_string = tokenizer.batch_decode(output_ids.reshape(-1, 64), skip_special_tokens=True, max_length=64)
>>> output_string  # Sopa de avena en un tazón blanco con arándanos frescos
```

## Training data 🏋🏻‍♂️

The Spanish image captioning model was trained on a subset of the Conceptual 12M dataset by Google:

[Conceptual 12M](https://github.com/google-research-datasets/conceptual-12m), introduced by Changpinyo et al. in [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981).

The translated dataset can be downloaded from [conceptual-12m-multilingual-marian-es](https://huggingface.co/datasets/flax-community/conceptual-12m-multilingual-marian-es). We do not provide images as we do not own any of them. One can download images from the `image_url` section of the original Conceptual 12M dataset.

## Data Cleaning 🧹

Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs, out of which we took only 2.5M image-caption pairs.
#### **Train set:**

Total data: <br>
2475000 captions <br>
2475000 images <br>

#### **Validation set**

Total data: <br>
25000 captions <br>
25000 images <br>

## Training procedure 👨🏻‍💻

### Training

The model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores), i.e. **8 v3 TPU cores**, for 42K steps with a batch size of 128 and a sequence length of 128. The optimizer used is Adam with a learning rate of 3e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-8, a weight decay of 0.01, learning rate warmup for 1,000 steps and linear decay of the learning rate after.

We tracked experiments using Tensorboard, which can be found in the `Training Metrics` tab.

#### **Pretraining Results 📊**

Our model reached an **eval loss of ~3.1** around ~20K steps. Here are the BLEU^ scores for different languages:

|Language |BLEU-1|BLEU-2|BLEU-3|BLEU-4|
|--------------------------|------|------|------|------|
|Spanish | 0.2015| 0.1348| 0.09982| 0.0748|

^BLEU scores are out of 1

## **App Demo**

You can try out our model on 🤗 Huggingface's spaces 🪐:
[Streamlit app of Spanish Image Captioning model on Huggingface Spaces](https://huggingface.co/spaces/flax-community/spanish-image-captioning)

## Team Members
- Bhavitvya Malik [@bhavitvyamalik](https://github.com/bhavitvyamalik)
- Gunjan Chhablani [@gchhablani](https://github.com/gchhablani)

## Credits
Thanks to Huggingface 🤗 & the Google JAX/Flax team for such a wonderful community week. Big thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@patil-suraj](https://github.com/patil-suraj) for helping us with our solution during the community week.

<img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
AIDA-UPM/MSTSb_stsb-xlm-r-multilingual
AIDA-UPM
2021-07-21T18:32:31Z
54
1
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1438 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 1, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 4e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 144, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
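The DataLoader/loss/fit parameters listed above map directly onto the standard sentence-transformers training loop. Below is a minimal sketch of an equivalent fine-tuning setup; the base checkpoint name and the example sentence pairs are assumptions for illustration, not taken from this card.

```python
# Sketch of an STS-style fine-tuning run with CosineSimilarityLoss, mirroring the parameters above.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("stsb-xlm-r-multilingual")  # assumed starting checkpoint
train_examples = [
    InputExample(texts=["A plane is taking off.", "An air plane is taking off."], label=0.95),
    InputExample(texts=["A man is playing a flute.", "A man is eating pasta."], label=0.05),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=144,
    optimizer_params={"lr": 4e-05},
)
```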
ifis-zork/ZORK_AI_SCI_FI
ifis-zork
2021-07-21T14:18:15Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: ZORK_AI_SCI_FI results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZORK_AI_SCI_FI This model is a fine-tuned version of [ifis-zork/ZORK_AI_SCI_FI_TEMP](https://huggingface.co/ifis-zork/ZORK_AI_SCI_FI_TEMP) on an unkown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
defex/distilgpt2-finetuned-amazon-reviews
defex
2021-07-21T10:36:15Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - null model_index: - name: distilgpt2-finetuned-amazon-reviews results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-amazon-reviews This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
lemon234071/t5-base-Chinese
lemon234071
2021-07-21T09:49:44Z
78
16
transformers
[ "transformers", "pytorch", "jax", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
An mt5-base model whose vocabulary and word embeddings have been truncated so that only Chinese and English tokens are retained.

https://github.com/lemon234071/TransformerBaselines
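The interesting part here is the embedding truncation itself: keeping only the embedding rows for a reduced token set and remapping the vocabulary accordingly. The snippet below is an illustrative sketch of that idea with a toy token selection, not the author's actual conversion code.

```python
# Toy sketch of truncating an embedding table to a kept subset of token ids.
import torch

full_embedding = torch.randn(250112, 768)                 # mt5-base sized table: vocab_size x hidden_size
kept_token_ids = torch.tensor([0, 1, 2, 259, 260, 261])   # toy selection of tokens to keep
truncated_embedding = full_embedding[kept_token_ids]      # new, much smaller embedding matrix
print(truncated_embedding.shape)                          # torch.Size([6, 768])
# The tokenizer is then rebuilt so the kept tokens map onto ids 0..len(kept_token_ids)-1.
```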
flax-community/clip-vision-bert-vqa-ft-6k
flax-community
2021-07-21T09:21:58Z
4
4
transformers
[ "transformers", "jax", "clip-vision-bert", "text-classification", "arxiv:1908.03557", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# CLIP-Vision-BERT Multilingual VQA Model Fine-tuned CLIP-Vision-BERT on translated [VQAv2](https://visualqa.org/challenge.html) image-text pairs using sequence classification objective. We translate the dataset to three other languages other than English: French, German, and Spanish using the [MarianMT Models](https://huggingface.co/transformers/model_doc/marian.html). This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). The output is 3129 class logits, the same classes as used by VisualBERT authors. The initial weights are loaded from the Conceptual-12M 60k [checkpoints](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k). We trained the CLIP-Vision-BERT VQA model during community week hosted by Huggingface 🤗 using JAX/Flax. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from the CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ This model is fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since VQAv2 is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly on visual question answering. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForSequenceClassification >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained('flax-community/clip-vision-bert-vqa-ft-6k') >>> text = "Are there teddy bears in the image?" >>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> preds = outputs.logits[0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = list(map(model.config.id2label.get,top_5_indices)) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'yes': 15.809224, 'no': 7.8785815, '<unk>': 4.622649, 'very': 4.511462, 'neither': 3.600822} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was fine-tuned on the translated version of the VQAv2 dataset in four languages using Marian: English, French, German and Spanish. Hence, the dataset is four times the original English questions. 
The dataset questions and image URLs/paths can be downloaded from [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa). ## Data Cleaning 🧹 Though the original dataset contains 443,757 train and 214,354 validation image-question pairs. We only use the `multiple_choice_answer`. The answers which are not present in the 3129 classes are mapped to the `<unk>` label. **Splits** We use the original train-val splits from the VQAv2 dataset. After translation, we get 1,775,028 train image-text pairs, and 857,416 validation image-text pairs. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]`. ### Fine-tuning The checkpoint of the model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 6k steps with a per device batch size of 128 and a max sequence length of 128. The optimizer used is AdamW with a learning rate of 5e-5, learning rate warmup for 1600 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is link to main dashboard: [CLIP Vision BERT VQAv2 Fine-tuning Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard) #### **Fine-tuning Results 📊** The model at this checkpoint reached **eval accuracy of 0.49** on our multilingual VQAv2 dataset. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik[@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project. Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
flax-community/clip-vision-bert-cc12m-60k
flax-community
2021-07-21T09:17:15Z
9
2
transformers
[ "transformers", "jax", "clip-vision-bert", "fill-mask", "arxiv:1908.03557", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# CLIP-Vision-BERT Multilingual Pre-trained Model Pretrained CLIP-Vision-BERT pre-trained on translated [Conceptual-12M](https://github.com/google-research-datasets/conceptual-12m) image-text pairs using a masked language modeling (MLM) objective. 10M cleaned image-text pairs are translated using [mBART-50 one-to-many model](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) to 2.5M examples each in English, French, German and Spanish. This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). We trained CLIP-Vision-BERT model during community week hosted by Huggingface 🤗 using JAX/Flax. This checkpoint is pre-trained for 60k steps. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks such as visuo-linguistic sequence classification or visual question answering. We used this model to fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since Conceptual-12M is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly with a pipeline for masked language modeling. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForMaskedLM >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForMaskedLM.from_pretrained('flax-community/clip-vision-bert-cc12m-60k') >>> text = "Three teddy [MASK] in a showcase." >>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> indices = np.where(tokens['input_ids']==tokenizer.mask_token_id) >>> preds = outputs.logits[indices][0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores /home/crocoder/anaconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:4615: UserWarning: 'kind' argument to argsort is ignored. 
warnings.warn("'kind' argument to argsort is ignored.") >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = tokenizer.convert_ids_to_tokens(top_5_indices) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'bears': 19.241959, 'bear': 17.700356, 'animals': 14.368396, 'girls': 14.343797, 'dolls': 14.274415} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was pre-trained on a translated version of the Conceptual-12m dataset in four languages using mBART-50: English, French, German and Spanish, with 2.5M image-text pairs in each. The dataset captions and image urls can be downloaded from [flax-community/conceptual-12m-mbart-50-translated](https://huggingface.co/datasets/flax-community/conceptual-12m-mbart-50-multilingual). ## Data Cleaning 🧹 Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs. **Splits** We used 99% of the 10M examples as a train set, and the remaining ~ 100K examples as our validation set. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The visual embeddings are taken from the CLIP-Vision model and combined with the textual embeddings inside the BERT embedding layer. The padding is done in the middle. Here is an example of what the embeddings look like: ``` [CLS Emb] [Textual Embs] [SEP Emb] [Pad Embs] [Visual Embs] ``` A total length of 128 tokens, including the visual embeddings, is used. The texts are truncated or padded accordingly. ### Pretraining The checkpoint of the model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 60k steps with a per device batch size of 64 and a max sequence length of 128. The optimizer used is Adafactor with a learning rate of 1e-4, learning rate warmup for 5,000 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is the link to the main dashboard: [CLIP Vision BERT CC12M Pre-training Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-ckpts/tensorboard) #### **Pretraining Results 📊** The model at this checkpoint reached **eval accuracy of 67.53%** and **with train loss at 1.793 and eval loss at 1.724**. ## Fine Tuning on downstream tasks We performed fine-tuning on downstream tasks. We used the following datasets for visual question answering: 1. Multilingual of [Visual Question Answering (VQA) v2](https://visualqa.org/challenge.html) - We translated this dataset to the four languages using `Helsinki-NLP` Marian models. The translated data can be found at [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa). The checkpoints for the fine-tuned model on this pre-trained checkpoint can be found [here](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard). 
The fine-tuned model achieves eval accuracy of 49% on our validation dataset. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik[@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project. Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
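As a footnote to the pre-training section above: the 15% selection with the 80/10/10 replacement rule is the standard BERT masking scheme. A simplified sketch of that rule is shown below; it is not the project's actual data pipeline.

```python
# Simplified sketch of the 15% / 80-10-10 masking rule described in the pre-training section.
import numpy as np

def mask_tokens(input_ids, mask_token_id, vocab_size, seed=0):
    rng = np.random.default_rng(seed)
    input_ids = np.array(input_ids)
    labels = np.full_like(input_ids, -100)              # -100 = position ignored by the MLM loss
    selected = rng.random(input_ids.shape) < 0.15       # 15% of tokens are selected for masking
    labels[selected] = input_ids[selected]
    roll = rng.random(input_ids.shape)
    input_ids[selected & (roll < 0.8)] = mask_token_id  # 80% of those become [MASK]
    rand = selected & (roll >= 0.8) & (roll < 0.9)      # 10% become a random token
    input_ids[rand] = rng.integers(0, vocab_size, rand.sum())
    return input_ids, labels                            # remaining 10% are left unchanged

ids, labels = mask_tokens([101, 2023, 2003, 1037, 7953, 102], mask_token_id=103, vocab_size=110000)
print(ids, labels)
```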
huggingtweets/grapefried
huggingtweets
2021-07-21T08:54:37Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/grapefried/1626857673378/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1392696284549632008/QOl3l-zh_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ju1ce💎</div> <div style="text-align: center; font-size: 14px;">@grapefried</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ju1ce💎. | Data | ju1ce💎 | | --- | --- | | Tweets downloaded | 2034 | | Retweets | 504 | | Short tweets | 403 | | Tweets kept | 1127 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1actx5cl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @grapefried's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a1nwhd0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a1nwhd0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/grapefried') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CZWin32768/xlm-align
CZWin32768
2021-07-21T07:53:29Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "arxiv:2106.06381", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
# XLM-Align

**Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment** (ACL-2021, [paper](https://arxiv.org/pdf/2106.06381.pdf), [github](https://github.com/CZWin32768/XLM-Align))

XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our [paper](https://arxiv.org/pdf/2106.06381.pdf).

## Example

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("CZWin32768/xlm-align")
```

## Evaluation Results

XTREME cross-lingual understanding tasks:

| Model | POS | NER | XQuAD | MLQA | TyDiQA | XNLI | PAWS-X | Avg |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| XLM-R_base | 75.6 | 61.8 | 71.9 / 56.4 | 65.1 / 47.2 | 55.4 / 38.3 | 75.0 | 84.9 | 66.4 |
| XLM-Align | **76.0** | **63.7** | **74.7 / 59.0** | **68.1 / 49.8** | **62.1 / 44.8** | **76.2** | **86.8** | **68.9** |

## MD5

```
b9d214025837250ede2f69c9385f812c  config.json
6005db708eb4bab5b85fa3976b9db85b  pytorch_model.bin
bf25eb5120ad92ef5c7d8596b5dc4046  sentencepiece.bpe.model
eedbd60a7268b9fc45981b849664f747  tokenizer.json
```

## About

Contact: chizewen\@outlook.com

BibTeX:

```
@article{xlmalign,
  title={Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment},
  author={Zewen Chi and Li Dong and Bo Zheng and Shaohan Huang and Xian-Ling Mao and Heyan Huang and Furu Wei},
  journal={arXiv preprint arXiv:2106.06381},
  year={2021}
}
```
huggingtweets/plesmasquerade
huggingtweets
2021-07-21T02:40:45Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/plesmasquerade/1626834982015/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1415803411002314752/X0K3MR1R_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">lovely lovely aerie, 🍭👑🪞🕯️🌙💫🪶🧣🗑️🔪</div> <div style="text-align: center; font-size: 14px;">@plesmasquerade</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from lovely lovely aerie, 🍭👑🪞🕯️🌙💫🪶🧣🗑️🔪. | Data | lovely lovely aerie, 🍭👑🪞🕯️🌙💫🪶🧣🗑️🔪 | | --- | --- | | Tweets downloaded | 3235 | | Retweets | 1376 | | Short tweets | 330 | | Tweets kept | 1529 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39gtjjjo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @plesmasquerade's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6jt0gb2r) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6jt0gb2r/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/plesmasquerade') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lg/ghpy_20k
lg
2021-07-20T23:55:56Z
10
2
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
**This model is provided with no guarantees whatsoever; use at your own risk.** This is a GPT-Neo 2.7B model fine-tuned for 20k steps on GitHub data (filtered to Python only) scraped by an EleutherAI member. A better code model is coming soon™ (hopefully, maybe); this model was created mostly as a test of infrastructure code.
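No usage snippet is included with the card; below is a minimal sketch of prompting the checkpoint through the transformers text-generation pipeline, assuming the standard GPT-Neo auto classes can load it (the prompt and generation settings are illustrative only):

```python
from transformers import pipeline

# Assumption: the checkpoint loads with the standard GPT-Neo auto classes.
generator = pipeline("text-generation", model="lg/ghpy_20k")

# Illustrative Python prompt; low temperature keeps completions close to common idioms.
prompt = "def fibonacci(n):\n    "
print(generator(prompt, max_length=64, do_sample=True, temperature=0.2)[0]["generated_text"])
```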
vasudevgupta/tf-wav2vec2-base
vasudevgupta
2021-07-20T23:07:35Z
2
0
transformers
[ "transformers", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
TensorFlow version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base). Obtained using the conversion script from https://github.com/vasudevgupta7/gsoc-wav2vec2.
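No loading example is provided; a minimal sketch, assuming the exported weights are compatible with transformers' `TFWav2Vec2Model` (if they only work with the `gsoc-wav2vec2` package, load them there instead):

```python
import numpy as np
from transformers import TFWav2Vec2Model, Wav2Vec2FeatureExtractor

# Assumption: the checkpoint is loadable through the TF wav2vec2 classes in transformers.
model = TFWav2Vec2Model.from_pretrained("vasudevgupta/tf-wav2vec2-base")
# Reuse the feature extractor of the original PyTorch checkpoint.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")

speech = np.zeros(16000, dtype=np.float32)  # one second of silence as a placeholder input
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="tf")
hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```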
ifis-zork/ZORK_AI_MODERN
ifis-zork
2021-07-20T20:47:22Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: ZORK_AI_MODERN results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZORK_AI_MODERN This model is a fine-tuned version of [ifis-zork/ZORK_AI_MODERN_A](https://huggingface.co/ifis-zork/ZORK_AI_MODERN_A) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
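The auto-generated card stops at the hyperparameters; a minimal inference sketch with the text-generation pipeline (the interactive-fiction style prompt below is purely illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ifis-zork/ZORK_AI_MODERN")

# Illustrative Zork-style prompt; sampling settings are arbitrary defaults.
prompt = "You are standing in a dimly lit office. There is a locked drawer here."
print(generator(prompt, max_length=80, do_sample=True, top_p=0.95)[0]["generated_text"])
```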
ifis-zork/ZORK_AI_MODERN_A
ifis-zork
2021-07-20T19:37:56Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: ZORK_AI_MODERN_A results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZORK_AI_MODERN_A This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
huggingtweets/sharsenko
huggingtweets
2021-07-20T16:08:39Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/sharsenko/1626797315466/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1411529618180431873/Eyc2bjZV_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Willo</div> <div style="text-align: center; font-size: 14px;">@sharsenko</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Willo. | Data | Willo | | --- | --- | | Tweets downloaded | 1279 | | Retweets | 304 | | Short tweets | 219 | | Tweets kept | 756 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r0bziin/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sharsenko's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37iziw4p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37iziw4p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sharsenko') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ritog/bangla-gpt2
ritog
2021-07-20T15:22:47Z
12
2
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "bn", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: bn tags: - text-generation widget: - text: আজ একটি সুন্দর দিন এবং আমি --- # Bangla-GPT2 ### A GPT-2 Model for the Bengali Language * Dataset- mc4 Bengali * Training time- ~40 hours * Written in- JAX If you use this model, please cite: ``` @misc{bangla-gpt2, author = {Ritobrata Ghosh}, year = {2016}, title = {Bangla GPT-2}, publisher = {Hugging Face} } ```
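The card gives no inference example; a minimal generation sketch using the prompt from the card's widget (sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ritog/bangla-gpt2")

# Prompt taken from the card's widget example.
prompt = "আজ একটি সুন্দর দিন এবং আমি"
print(generator(prompt, max_length=60, do_sample=True, top_k=50)[0]["generated_text"])
```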
idrimadrid/autonlp-creator_classifications-4021083
idrimadrid
2021-07-20T12:57:16Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:idrimadrid/autonlp-data-creator_classifications", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - idrimadrid/autonlp-data-creator_classifications --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 4021083 ## Validation Metrics - Loss: 0.6848716735839844 - Accuracy: 0.8825910931174089 - Macro F1: 0.41301646762109634 - Micro F1: 0.8825910931174088 - Weighted F1: 0.863740586166105 - Macro Precision: 0.4129337301330573 - Micro Precision: 0.8825910931174089 - Weighted Precision: 0.8531335941587811 - Macro Recall: 0.44466614072309585 - Micro Recall: 0.8825910931174089 - Weighted Recall: 0.8825910931174089 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idrimadrid/autonlp-creator_classifications-4021083 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("idrimadrid/autonlp-creator_classifications-4021083", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("idrimadrid/autonlp-creator_classifications-4021083", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
flax-community/roberta-hindi
flax-community
2021-07-20T12:50:29Z
105
2
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- widget: - text: "मुझे उनसे बात करना <mask> अच्छा लगा" - text: "हम आपके सुखद <mask> की कामना करते हैं" - text: "सभी अच्छी चीजों का एक <mask> होता है" --- # RoBERTa base model for Hindi language Pretrained model on Hindi language using a masked language modeling (MLM) objective. [A more interactive & comparison demo is available here](https://huggingface.co/spaces/flax-community/roberta-hindi). > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-roberta-from-scratch-in-hindi/7091), organized by [Hugging Face](https://huggingface.co/) and TPU usage sponsored by Google. ## Model description RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data(a combination of **mc4, oscar and indic-nlp** datasets) ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi') >>> unmasker("हम आपके सुखद <mask> की कामना करते हैं") [{'score': 0.3310680091381073, 'sequence': 'हम आपके सुखद सफर की कामना करते हैं', 'token': 1349, 'token_str': ' सफर'}, {'score': 0.15317578613758087, 'sequence': 'हम आपके सुखद पल की कामना करते हैं', 'token': 848, 'token_str': ' पल'}, {'score': 0.07826550304889679, 'sequence': 'हम आपके सुखद समय की कामना करते हैं', 'token': 453, 'token_str': ' समय'}, {'score': 0.06304813921451569, 'sequence': 'हम आपके सुखद पहल की कामना करते हैं', 'token': 404, 'token_str': ' पहल'}, {'score': 0.058322224766016006, 'sequence': 'हम आपके सुखद अवसर की कामना करते हैं', 'token': 857, 'token_str': ' अवसर'}] ``` ## Training data The RoBERTa Hindi model was pretrained on the reunion of the following datasets: - [OSCAR](https://huggingface.co/datasets/oscar) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. - [mC4](https://huggingface.co/datasets/mc4) is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. - [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark. - [Samanantar](https://indicnlp.ai4bharat.org/samanantar/) is a parallel corpora collection for Indic language. - [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summary collected from Hindi News Websites. - [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines collected from Hindi News Websites. - [Old Newspapers Hindi](https://www.kaggle.com/crazydiv/oldnewspapershindi) is a cleaned subset of HC Corpora newspapers. ## Training procedure ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`. - We had to perform cleanup of **mC4** and **oscar** datasets by removing all non hindi (non Devanagari) characters from the datasets. 
- We tried to filter out evaluation set of WikiNER of [IndicGlue](https://indicnlp.ai4bharat.org/indic-glue/) benchmark by [manual labelling](https://github.com/amankhandelia/roberta_hindi/blob/master/wikiner_incorrect_eval_set.csv) where the actual labels were not correct and modifying the [downstream evaluation dataset](https://github.com/amankhandelia/roberta_hindi/blob/master/utils.py). The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ### Pretraining The model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores).A randomized shuffle of combined dataset of **mC4, oscar** and other datasets listed above was used to train the model. Training logs are present in [wandb](https://wandb.ai/wandb/hf-flax-roberta-hindi). ## Evaluation Results RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below. | Task | Task Type | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi | |-------------------------|----------------------|-----------|------------|-------------------------------|-----------------------|---------------| | BBC News Classification | Genre Classification | **76.44** | 66.86 | **77.6** | 64.9 | 73.67 | | WikiNER | Token Classification | - | 90.68 | **95.09** | 89.61 | **92.76** | | IITP Product Reviews | Sentiment Analysis | **78.01** | 73.23 | **78.39** | 66.16 | 75.53 | | IITP Movie Reviews | Sentiment Analysis | 60.97 | 52.26 | **70.65** | 49.35 | **61.29** | ## Team Members - Aman K ([amankhandelia](https://huggingface.co/amankhandelia)) - Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk)) - Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv)) - Prateek Agrawal ([prateekagrawal](https://huggingface.co/prateekagrawal)) - Rahul Dev ([mlkorra](https://huggingface.co/mlkorra)) ## Credits Huge thanks to Hugging Face 🤗 & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) & [Patrick von Platen](https://huggingface.co/patrickvonplaten) for mentoring during the whole week. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:medium>
flax-community/mr-indicnlp-classifier
flax-community
2021-07-19T12:53:33Z
10
1
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# IndicNLP Marathi News Classifier This model was fine-tuned from [Marathi RoBERTa](https://huggingface.co/flax-community/roberta-base-mr) on the [IndicNLP Marathi News Dataset](https://github.com/AI4Bharat/indicnlp_corpus#indicnlp-news-article-classification-dataset). ## Dataset The IndicNLP Marathi news dataset consists of 3 classes - `['lifestyle', 'entertainment', 'sports']` - with the following document distribution across splits: | train | eval | test | | ----- | ---- | ---- | | 9672 | 477 | 478 | 💯 Our **`mr-indicnlp-classifier`** model, fine-tuned from the pretrained Marathi RoBERTa model **roberta-base-mr**, outperformed both classifiers mentioned in [Arora, G. (2020). iNLTK](https://www.semanticscholar.org/paper/iNLTK%3A-Natural-Language-Toolkit-for-Indic-Languages-Arora/5039ed9e100d3a1cbbc25a02c82f6ee181609e83/figure/3) and [Kunchukuttan, Anoop et al. AI4Bharat-IndicNLP.](https://www.semanticscholar.org/paper/AI4Bharat-IndicNLP-Corpus%3A-Monolingual-Corpora-and-Kunchukuttan-Kakwani/7997d432925aff0ba05497d2893c09918298ca55/figure/4) | Dataset | FT-W | FT-WC | INLP | iNLTK | **roberta-base-mr 🏆** | | --------------- | ----- | ----- | ----- | ----- | --------------------- | | iNLTK Headlines | 83.06 | 81.65 | 89.92 | 92.4 | **97.48** |
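No usage snippet accompanies the results above; a minimal sketch with the text-classification pipeline, assuming the checkpoint exposes a standard three-label classification head (supply your own Marathi headline):

```python
from transformers import pipeline

# Assumption: the checkpoint carries a standard sequence-classification head
# over the three classes ('lifestyle', 'entertainment', 'sports').
classifier = pipeline("text-classification", model="flax-community/mr-indicnlp-classifier")

headline = "<a Marathi news headline goes here>"
print(classifier(headline))
```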
KrishnaChandra4/DialoGPT-small-Rick
KrishnaChandra4
2021-07-19T08:53:30Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - conversational ---
flax-community/t5-vae-wiki
flax-community
2021-07-19T07:03:14Z
3
0
transformers
[ "transformers", "jax", "transformer_vae", "vae", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: vae license: apache-2.0 --- # T5-VAE-Wiki (flax) A Transformer-VAE built with Flax. It has been trained to interpolate between sentences from Wikipedia. Done as part of Hugging Face community training ([see forum post](https://discuss.huggingface.co/t/train-a-vae-to-interpolate-on-english-sentences/7548)). Builds on T5, using an autoencoder to convert it into an MMD-VAE ([more info](http://fras.uk/ml/large%20prior-free%20models/transformer-vae/2020/08/13/Transformers-as-Variational-Autoencoders.html)). ## How to use from the 🤗/transformers library Add the model repo as a submodule: ```bash git submodule add https://github.com/Fraser-Greenlee/t5-vae-flax.git t5_vae_flax ``` ```python from transformers import AutoTokenizer from t5_vae_flax.src.t5_vae import FlaxT5VaeForAutoencoding tokenizer = AutoTokenizer.from_pretrained("t5-base") model = FlaxT5VaeForAutoencoding.from_pretrained("flax-community/t5-vae-wiki") ``` ## Setup Run `setup_tpu_vm_venv.sh` to set up a virtual environment on a TPU VM for training.
huggingtweets/heyarav
huggingtweets
2021-07-19T01:28:18Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1416877970132672512/942NnDJA_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Arav</div> <div style="text-align: center; font-size: 14px;">@heyarav</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Arav. | Data | Arav | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 411 | | Short tweets | 786 | | Tweets kept | 2049 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n441q7z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heyarav's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s8u4vm6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s8u4vm6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/heyarav') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
clip-italian/clip-italian-final
clip-italian
2021-07-18T16:35:01Z
12
0
transformers
[ "transformers", "jax", "hybrid-clip", "italian", "bert", "vit", "vision", "it", "dataset:wit", "dataset:ctl/conceptualCaptions", "dataset:mscoco-it", "arxiv:2103.00020", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: it license: datasets: - wit - ctl/conceptualCaptions - mscoco-it tags: - italian - bert - vit - vision --- # CLIP-Italian CLIP Italian is a CLIP-like Model for Italian. The CLIP model (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and is able to efficiently learn visual concepts from natural language supervision. We fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Training Data We considered three main sources of data: - [WIT](https://github.com/google-research-datasets/wit) - [MSCOCO-IT](https://github.com/crux82/mscoco-it) - [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) ## Training Procedure Preprocessing, hardware used, hyperparameters... ## Evaluation Performance ## Limitations ## Usage ## Team members - Federico Bianchi ([vinid](https://huggingface.co/vinid)) - Raphael Pisoni ([4rtemi5](https://huggingface.co/4rtemi5)) - Giuseppe Attanasio ([g8a9](https://huggingface.co/g8a9)) - Silvia Terragni ([silviatti](https://huggingface.co/silviatti)) - Dario Balestri ([D3Reo](https://huggingface.co/D3Reo)) - Gabriele Sarti ([gsarti](https://huggingface.co/gsarti)) - Sri Lakshmi ([srisweet](https://huggingface.co/srisweet)) ## Useful links - [CLIP Blog post](https://openai.com/blog/clip/) - [CLIP paper](https://arxiv.org/abs/2103.00020) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week channel](https://discord.com/channels/858019234139602994/859711887520038933) - [Hybrid CLIP example scripts](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip) - [Model Repository](https://huggingface.co/clip-italian/clip-italian-final/)
flax-community/gpt-2-tamil
flax-community
2021-07-18T16:03:33Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "ta", "dataset:oscar", "dataset:IndicNLP", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: ta datasets: - oscar - IndicNLP widget: - text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு' --- # GPT2-Tamil This repository was created as part of the Flax/JAX community week by Hugging Face. The aim of this project is to pretrain a GPT-2 language model specifically for the Tamil language. ## Setup: To set up the project, run the following command: ```bash pip install -r requirements.txt ``` ## Model: Pretrained model on the Tamil language using a causal language modeling (CLM) objective. ## Dataset Used: The GPT-2 model is trained on the [oscar dataset - ta](https://huggingface.co/datasets/oscar) ## Intended uses & limitations: You can use the raw model for text generation, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt) to look for fine-tuned versions on a task that interests you. ## How to pretrain the model: To perform training, follow these steps: - Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.) ```bash >>> export MODEL_DIR=<model_dir> ``` - Create the config.json by running the following command: ```bash >>> python src/create_config.py ``` - Create the tokenizer by running the following command: ```bash >>> python src/train_tokenizer.py ``` - Once the config and tokenizer are created, run the following script to start training the Flax model: ```bash >>> python scripts/train_gpt2-oscar-tamil.sh ``` ## How to use: To perform language generation with the model, the pipeline API can be used directly. - First, convert the Flax model to PyTorch using the following command: ```bash python src/convert_flax_to_pytorch.py ``` - Use the following snippet to perform language generation: ```python >>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed >>> model_name = 'abinayam/gpt-2-tamil' >>> model = AutoModelWithLMHead.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> set_seed(42) >>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு" >>> max_len = 300 >>> no_seq = 5 >>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer) >>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq) ```
imvladikon/bert-base-uncased-jigsaw
imvladikon
2021-07-18T15:46:05Z
1
0
transformers
[ "transformers", "pytorch", "bert", "generated_from_trainer", "en", "dataset:jigsaw", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: tags: - generated_from_trainer datasets: - jigsaw model_index: - name: bert-base-uncased results: - {} --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased This model is a fine-tuned version of [](https://huggingface.co/) on the jigsaw dataset. It achieves the following results on the evaluation set: - Loss: 0.0393 - Precision Micro: 0.7758 - Recall Micro: 0.7858 - F1 Micro: 0.7808 - F2 Micro: 0.7838 - Precision Macro: 0.6349 - Recall Macro: 0.5972 - F1 Macro: 0.6105 - F2 Macro: 0.6015 - Overall Precision: 0.9841 - Overall Recall: 0.9841 - Overall F1: 0.9841 - Overall F2: 0.9841 - Overall Accuracy: 0.9841 - Matthews Corrcoef: 0.7725 - Aucroc Macro: 0.9897 - Aucroc Micro: 0.9920 - Accuracy Toxic: 0.9678 - F1 Toxic: 0.8295 - Accuracy Severe Toxic: 0.9899 - F1 Severe Toxic: 0.3313 - Accuracy Obscene: 0.9816 - F1 Obscene: 0.8338 - Accuracy Threat: 0.9974 - F1 Threat: 0.4545 - Accuracy Insult: 0.9763 - F1 Insult: 0.7662 - Accuracy Identity Hate: 0.9914 - F1 Identity Hate: 0.4480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision Micro | Recall Micro | F1 Micro | F2 Micro | Precision Macro | Recall Macro | F1 Macro | F2 Macro | Overall Precision | Overall Recall | Overall F1 | Overall F2 | Overall Accuracy | Matthews Corrcoef | Aucroc Macro | Aucroc Micro | Accuracy Toxic | F1 Toxic | Accuracy Severe Toxic | F1 Severe Toxic | Accuracy Obscene | F1 Obscene | Accuracy Threat | F1 Threat | Accuracy Insult | F1 Insult | Accuracy Identity Hate | F1 Identity Hate | |:-------------:|:-----:|:-----:|:---------------:|:---------------:|:------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:--------:|:-----------------:|:--------------:|:----------:|:----------:|:----------------:|:-----------------:|:------------:|:------------:|:--------------:|:--------:|:---------------------:|:---------------:|:----------------:|:----------:|:---------------:|:---------:|:---------------:|:---------:|:----------------------:|:----------------:| | 0.0433 | 1.0 | 2659 | 0.0423 | 0.7607 | 0.7798 | 0.7702 | 0.7759 | 0.6398 | 0.5561 | 0.5585 | 0.5535 | 0.9832 | 0.9832 | 0.9832 | 0.9832 | 0.9832 | 0.7615 | 0.9887 | 0.9908 | 0.9671 | 0.8211 | 0.9878 | 0.4354 | 0.9805 | 0.8265 | 0.9974 | 0.2243 | 0.9746 | 0.7602 | 0.9918 | 0.2834 | | 0.0366 | 2.0 | 5318 | 0.0393 | 0.7758 | 0.7858 | 0.7808 | 0.7838 | 0.6349 | 0.5972 | 0.6105 | 0.6015 | 0.9841 | 0.9841 | 0.9841 | 0.9841 | 0.9841 | 0.7725 | 0.9897 | 0.9920 | 0.9678 | 0.8295 | 0.9899 | 0.3313 | 0.9816 | 0.8338 | 0.9974 | 0.4545 | 0.9763 | 0.7662 | 0.9914 | 0.4480 | | 0.0305 | 3.0 | 7977 | 0.0399 | 0.7608 | 0.8186 | 0.7887 | 0.8064 | 0.6621 | 0.6856 | 0.6715 | 0.6794 | 0.9842 | 0.9842 | 0.9842 | 0.9842 | 0.9842 | 0.7810 | 0.9897 | 0.9919 | 0.9662 | 0.8272 | 0.9892 | 0.4772 | 0.9815 | 0.8347 | 0.9977 | 0.5629 | 0.9772 | 0.7740 | 0.9931 | 
0.5528 | | 0.0263 | 4.0 | 10636 | 0.0435 | 0.7333 | 0.8336 | 0.7803 | 0.8114 | 0.6395 | 0.7039 | 0.6687 | 0.6890 | 0.9830 | 0.9830 | 0.9830 | 0.9830 | 0.9830 | 0.7732 | 0.9897 | 0.9912 | 0.9608 | 0.8083 | 0.9898 | 0.4791 | 0.9812 | 0.8319 | 0.9972 | 0.5368 | 0.9756 | 0.7700 | 0.9935 | 0.5861 | | 0.0218 | 5.0 | 13295 | 0.0456 | 0.7480 | 0.8108 | 0.7781 | 0.7974 | 0.6661 | 0.6720 | 0.6662 | 0.6691 | 0.9833 | 0.9833 | 0.9833 | 0.9833 | 0.9833 | 0.7701 | 0.9890 | 0.9907 | 0.9612 | 0.8071 | 0.9894 | 0.4642 | 0.9823 | 0.8354 | 0.9977 | 0.5325 | 0.9754 | 0.7613 | 0.9936 | 0.5968 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
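The card reports per-label metrics but no inference code; a minimal multi-label sketch, assuming the checkpoint exposes a standard sequence-classification head and that the six labels follow the order used in the metrics above:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "imvladikon/bert-base-uncased-jigsaw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Assumed label order, matching the per-label metrics reported in the card.
labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multi-label: sigmoid per label, not softmax
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```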
jacobshein/danish-bert-botxo-qa-squad
jacobshein
2021-07-18T11:19:49Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "danish", "question answering", "squad", "machine translation", "botxo", "da", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: da tags: - danish - bert - question answering - squad - machine translation - botxo license: cc-by-4.0 datasets: - common_crawl - wikipedia - dindebat.dk - hestenettet.dk - danish OpenSubtitles widget: - context: Stine sagde hej, men Jacob sagde halløj. --- # Danish BERT (version 2, uncased) by [BotXO](https://github.com/botxo/nordic_bert) fine-tuned for Question Answering (QA) on the [machine-translated SQuAD-da dataset](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/multilingual/squads-tar/da) ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("jacobshein/danish-bert-botxo-qa-squad") model = AutoModelForQuestionAnswering.from_pretrained("jacobshein/danish-bert-botxo-qa-squad") ``` #### Contact For further information on usage or fine-tuning procedure, please reach out by email through [jacobhein.com](https://jacobhein.com/#contact).
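Building on the loading snippet above, an inference sketch with the question-answering pipeline; the context is the widget example from the card and the question is an illustrative Danish one:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="jacobshein/danish-bert-botxo-qa-squad",
              tokenizer="jacobshein/danish-bert-botxo-qa-squad")

# Context from the card's widget; the question ("What did Jacob say?") is illustrative.
result = qa(question="Hvad sagde Jacob?", context="Stine sagde hej, men Jacob sagde halløj.")
print(result["answer"], result["score"])
```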
sehandev/koelectra-long-qa
sehandev
2021-07-18T06:01:25Z
3
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: koelectra-long-qa results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-long-qa This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1 - Datasets 1.9.0 - Tokenizers 0.10.3
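The auto-generated card has no usage example; a minimal sketch with the question-answering pipeline (supply your own Korean question/context pair; the placeholders below are not real data):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="sehandev/koelectra-long-qa")

# Placeholders: replace with a real Korean question and passage.
result = qa(question="<Korean question>", context="<Korean passage containing the answer>")
print(result)
```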
flax-community/wav2vec2-base-persian
flax-community
2021-07-18T05:44:28Z
24
1
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "wav2vec2", "pretraining", "speech", "fa", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: fa datasets: - common_voice tags: - speech license: apache-2.0 --- # Wav2Vec2 4 Persian > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-wav2vec2-in-persian/8180), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Team Members - Mehrdad Farahani ([m3hrdadfi](https://huggingface.co/m3hrdadfi)) ## Dataset TODO: Update ## How To Use TODO: Update ## Demo TODO: Update ## Evaluation TODO: Update
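While the usage section is still a TODO, here is a minimal feature-extraction sketch; the checkpoint is a pretrained (not CTC fine-tuned) model, so it returns hidden states rather than transcriptions, and loading it through `Wav2Vec2Model` with a hand-built feature extractor is an assumption:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Assumption: the pretraining checkpoint loads into the bare Wav2Vec2Model encoder.
model = Wav2Vec2Model.from_pretrained("flax-community/wav2vec2-base-persian")
# Feature extractor configured like the wav2vec2-base defaults (16 kHz mono input).
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=False,
)

waveform = torch.zeros(16000).numpy()  # one second of 16 kHz audio as a placeholder
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```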
huggingtweets/percyvader
huggingtweets
2021-07-17T22:54:48Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/percyvader/1626562484510/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/848218865528078336/OTr3Lo3N_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">trades cowboy hat for fedora</div> <div style="text-align: center; font-size: 14px;">@percyvader</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from trades cowboy hat for fedora. | Data | trades cowboy hat for fedora | | --- | --- | | Tweets downloaded | 2818 | | Retweets | 628 | | Short tweets | 746 | | Tweets kept | 1444 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2vmsj6nk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @percyvader's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1euqbqf4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1euqbqf4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/percyvader') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
birgermoell/t5-base-swedish
birgermoell
2021-07-17T07:52:39Z
11
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "feature-extraction", "summarization", "translation", "sv", "dataset:oscar", "arxiv:1910.10683", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - sv datasets: - oscar tags: - summarization - translation license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/oscar) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) ## Model series This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki # Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish
birgermoell/swedish-gpt
birgermoell
2021-07-17T07:45:52Z
30
2
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "sv", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: sv widget: - text: "Jag är en svensk språkmodell." --- ## Model series This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki # Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish # GPT-svenska-wikipedia A swedish GPT2 style model trained using Flax CLM pipeline on the Swedish part of the wiki40b dataset and the Oscar dataset. https://huggingface.co/datasets/wiki40b The model was trained for around 22600 steps (42 hours) as part of the Huggingface Jax/Flax challenge with the following loss and learning rate Loss: 3.1715331077575684, Learning Rate: 0.0024816440418362617) The model could likely be trained for longer. ## Data cleaning and preprocessing The data was cleaned and preprocessed using the following script. Make sure to install depencies for beam_runner to make the dataset work. ```python from datasets import load_dataset def load_and_clean_wiki(): dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train") #dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner') dataset = dataset.remove_columns(['wikidata_id', 'version_id']) filtered_dataset = dataset.map(filter_wikipedia) # filtered_dataset[:3] # print(filtered_dataset[:3]) return filtered_dataset def filter_wikipedia(batch): batch["text"] = " ".join(batch["text"].split("\ _START_SECTION_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_ARTICLE_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_ARTICLE_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_PARAGRAPH_\ ")) batch["text"] = " ".join(batch["text"].split("_NEWLINE_")) batch["text"] = " ".join(batch["text"].split("\xa0")) return batch ``` ## Training script The following training script was used to train the model. ```bash ./run_clm_flax.py --output_dir="${MODEL_DIR}" --model_type="gpt2" --config_name="${MODEL_DIR}" --tokenizer_name="${MODEL_DIR}" --dataset_name="wiki40b" --dataset_config_name="sv" --do_train --do_eval --block_size="512" --per_device_train_batch_size="64" --per_device_eval_batch_size="64" --learning_rate="5e-3" --warmup_steps="1000" --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" --overwrite_output_dir --num_train_epochs="20" --logging_steps="500" --save_steps="1000" --eval_steps="2500" --push_to_hub ```
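The card covers data cleaning and training but not inference; a minimal generation sketch using the prompt from the card's widget:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="birgermoell/swedish-gpt")

# Prompt taken from the card's widget example.
prompt = "Jag är en svensk språkmodell."
print(generator(prompt, max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```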
liano/aura
liano
2021-07-17T04:48:44Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://arthritisrelieftx.com/123movies-watch-space-jam-a-new-legacy-2021-full-online-free-hd/ https://www.mycentraloregon.com/2021/07/16/how-to-watch-space-jam-a-new-legacy-free-streaming-space-jam-2-on-hbo-max-available-online/
huggingtweets/joshizcul
huggingtweets
2021-07-17T01:19:01Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/joshizcul/1626484737394/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391124699321569281/4aMGupaX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">J🔆🌒sh 💈🏵</div> <div style="text-align: center; font-size: 14px;">@joshizcul</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from J🔆🌒sh 💈🏵. | Data | J🔆🌒sh 💈🏵 | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 101 | | Short tweets | 716 | | Tweets kept | 2421 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3upndo8i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joshizcul's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iju0kbl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iju0kbl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/joshizcul') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ejazaii
huggingtweets
2021-07-16T21:20:13Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ejazaii/1626470409321/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1379529428808634376/6XrfvhAA_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ejaz</div> <div style="text-align: center; font-size: 14px;">@ejazaii</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ejaz. | Data | Ejaz | | --- | --- | | Tweets downloaded | 1273 | | Retweets | 134 | | Short tweets | 355 | | Tweets kept | 784 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hci1jrbh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ejazaii's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a5d8p5i) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a5d8p5i/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ejazaii') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/enderdev_
huggingtweets
2021-07-16T20:30:38Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/enderdev_/1626467434270/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1415445991503839234/RSxcTJiJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Kieran</div> <div style="text-align: center; font-size: 14px;">@enderdev_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Kieran. | Data | Kieran | | --- | --- | | Tweets downloaded | 2518 | | Retweets | 388 | | Short tweets | 691 | | Tweets kept | 1439 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qz7ps6o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @enderdev_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aqdw40t) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aqdw40t/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/enderdev_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BumBelDumBel/TRUMP
BumBelDumBel
2021-07-16T19:14:17Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer model_index: - name: TRUMP results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TRUMP This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
huggingtweets/vishigondi
huggingtweets
2021-07-16T19:01:40Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/vishigondi/1626462038417/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1019963234793684992/LVdF4ah2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Vishi Gondi</div> <div style="text-align: center; font-size: 14px;">@vishigondi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Vishi Gondi. | Data | Vishi Gondi | | --- | --- | | Tweets downloaded | 1154 | | Retweets | 166 | | Short tweets | 21 | | Tweets kept | 967 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2txfnkwt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vishigondi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eshvn0h5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eshvn0h5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vishigondi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BumBelDumBel/ZORK-AI-TEST
BumBelDumBel
2021-07-16T17:12:42Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer model_index: - name: ZORK-AI-TEST results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZORK-AI-TEST This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
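A minimal usage sketch for this GPT-2 fine-tune, assuming the checkpoint loads under the `BumBelDumBel/ZORK-AI-TEST` model ID; the interactive-fiction prompt is only an example.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BumBelDumBel/ZORK-AI-TEST")

# Classic text-adventure style opening line used as an example prompt.
print(generator("You are standing in an open field west of a white house.", max_length=60))
```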
huggingtweets/viktar_babaryka
huggingtweets
2021-07-16T15:48:12Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/viktar_babaryka/1626450488055/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1330073417048010752/kh1pK808_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Виктор Бабарико</div> <div style="text-align: center; font-size: 14px;">@viktar_babaryka</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Виктор Бабарико. | Data | Виктор Бабарико | | --- | --- | | Tweets downloaded | 1254 | | Retweets | 29 | | Short tweets | 64 | | Tweets kept | 1161 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d99evm6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @viktar_babaryka's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q2axvfy) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q2axvfy/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/viktar_babaryka') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
clarin-pl/fastText-kgr10
clarin-pl
2021-07-16T13:09:29Z
0
2
null
[ "fastText", "pl", "dataset:kgr10", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: pl tags: - fastText datasets: - kgr10 --- # KGR10 FastText Polish word embeddings Distributional language model (both textual and binary) for Polish (word embeddings) trained on KGR10 corpus (over 4 billion of words) using Fasttext with the following variants (all possible combinations): - dimension: 100, 300 - method: skipgram, cbow - tool: FastText, Magnitude - source text: plain, plain.lower, plain.lemma, plain.lemma.lower ## Models In the repository you can find 4 selected models, that were examined in the paper (see Citation). A model that performed the best is the default model/config (see `default_config.json`). ## Usage To use these embedding models easily, it is required to install [embeddings](https://github.com/CLARIN-PL/embeddings). ```bash pip install clarinpl-embeddings ``` ### Utilising the default model (the easiest way) Word embedding: ```python from embeddings.embedding.auto_flair import AutoFlairWordEmbedding from flair.data import Sentence sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.") embedding = AutoFlairWordEmbedding.from_hub("clarin-pl/fastText-kgr10") embedding.embed([sentence]) for token in sentence: print(token) print(token.embedding) ``` Document embedding (averaged over words): ```python from embeddings.embedding.auto_flair import AutoFlairDocumentEmbedding from flair.data import Sentence sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.") embedding = AutoFlairDocumentEmbedding.from_hub("clarin-pl/fastText-kgr10") embedding.embed([sentence]) print(sentence.embedding) ``` ### Customisable way Word embedding: ```python from embeddings.embedding.static.embedding import AutoStaticWordEmbedding from embeddings.embedding.static.fasttext import KGR10FastTextConfig from flair.data import Sentence config = KGR10FastTextConfig(method='cbow', dimension=100) embedding = AutoStaticWordEmbedding.from_config(config) sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.") embedding.embed([sentence]) for token in sentence: print(token) print(token.embedding) ``` Document embedding (averaged over words): ```python from embeddings.embedding.static.embedding import AutoStaticDocumentEmbedding from embeddings.embedding.static.fasttext import KGR10FastTextConfig from flair.data import Sentence config = KGR10FastTextConfig(method='cbow', dimension=100) embedding = AutoStaticDocumentEmbedding.from_config(config) sentence = Sentence("Myśl z duszy leci bystro, Nim się w słowach złamie.") embedding.embed([sentence]) print(sentence.embedding) ``` ## Citation The link below leads to the NextCloud directory with all variants of embeddings. If you use it, please cite the following article: ``` @article{kocon2018embeddings, author = {Koco\'{n}, Jan and Gawor, Micha{\l}}, title = {Evaluating {KGR10} {P}olish word embeddings in the recognition of temporal expressions using {BiLSTM-CRF}}, journal = {Schedae Informaticae}, volume = {27}, year = {2018}, url = {http://www.ejournals.eu/Schedae-Informaticae/2018/Volume-27/art/13931/}, doi = {10.4467/20838476SI.18.008.10413} } ```
flax-community/gpt2-medium-persian
flax-community
2021-07-16T13:01:08Z
364
9
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "gpt2", "text-generation", "fa", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: fa tags: - text-generation widget: - text: "در یک اتفاق شگفت انگیز، پژوهشگران" - text: "گرفتگی بینی در کودکان و به‌خصوص نوزادان باعث می‌شود" - text: "امیدواریم نوروز امسال سالی" --- # GPT2 Medium 4 Persian > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-persian/7560), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Team Members - [Mehrdad Farahani](huggingface.co/m3hrdadfi) - [Saied Alimoradi](https://discuss.huggingface.co/u/saied) - [M. Reza Zerehpoosh](huggingface.co/ironcladgeek) - [Hooman Sedghamiz](https://discuss.huggingface.co/u/hooman650) - [Mazeyar Moeini Feizabadi](https://discuss.huggingface.co/u/mazy1998) ## Dataset We used [Oscar](https://huggingface.co/datasets/oscar) dataset, which is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus. ## How To Use You can use this model directly with a pipeline for text generation. ```python from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian') model = GPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian') generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100}) generated_text = generator('در یک اتفاق شگفت انگیز، پژوهشگران') ``` For using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel. ## Demo ... SOON ## Evaluation ... SOON
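For the TensorFlow variant mentioned above, a minimal sketch could look like the following, assuming the TensorFlow weights in this repository load directly.

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel, pipeline

tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian')
model = TFGPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian')

# Same usage as the PyTorch snippet above, but backed by TensorFlow weights.
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
generated_text = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
```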
huggingtweets/skinny_pickens
huggingtweets
2021-07-16T04:53:07Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/skinny_pickens/1626411183607/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1414812371495776257/iChEbuNI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">skinny pickens</div> <div style="text-align: center; font-size: 14px;">@skinny_pickens</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from skinny pickens. | Data | skinny pickens | | --- | --- | | Tweets downloaded | 2817 | | Retweets | 1329 | | Short tweets | 154 | | Tweets kept | 1334 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2guwsx1g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @skinny_pickens's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/109349ze) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/109349ze/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/skinny_pickens') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
liam168/trans-opus-mt-en-zh
liam168
2021-07-16T04:17:11Z
446
29
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - zh tags: - translation widget: - text: "I like to study Data Science and Machine Learning." --- # liam168/trans-opus-mt-en-zh ## Model description * source group: English * target group: Chinese * model: transformer * source language(s): eng * target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant ## How to use ```python >>> from transformers import AutoModelWithLMHead,AutoTokenizer,pipeline >>> mode_name = 'liam168/trans-opus-mt-en-zh' >>> model = AutoModelWithLMHead.from_pretrained(mode_name) >>> tokenizer = AutoTokenizer.from_pretrained(mode_name) >>> translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer) >>> translation('I like to study Data Science and Machine Learning.', max_length=400) [{'translation_text': '我喜欢学习数据科学和机器学习'}] ``` ## Contact liam168520@gmail.com
liam168/trans-opus-mt-zh-en
liam168
2021-07-16T03:34:38Z
251
21
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - zh tags: - translation widget: - text: "我喜欢学习数据科学和机器学习。" --- # liam168/trans-opus-mt-zh-en ## Model description * source group: Chinese * target group: English * model: transformer * target language(s): eng ## How to use ```python >>> from transformers import AutoModelWithLMHead,AutoTokenizer,pipeline >>> mode_name = 'liam168/trans-opus-mt-zh-en' >>> model = AutoModelWithLMHead.from_pretrained(mode_name) >>> tokenizer = AutoTokenizer.from_pretrained(mode_name) >>> translation = pipeline("translation_zh_to_en", model=model, tokenizer=tokenizer) >>> translation('我喜欢学习数据科学和机器学习。', max_length=400) [{'translation_text': 'I like to study data science and machine learning.'}] ``` ## Contact liam168520@gmail.com
huggingtweets/gambsvns
huggingtweets
2021-07-15T21:50:46Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/gambsvns/1626385842515/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1415310065960198148/w9Yr9mLK_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">gãmbs</div> <div style="text-align: center; font-size: 14px;">@gambsvns</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from gãmbs. | Data | gãmbs | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 86 | | Short tweets | 308 | | Tweets kept | 2852 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wahjzcj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gambsvns's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1td3tcaf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1td3tcaf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gambsvns') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid
echarlaix
2021-07-15T13:11:02Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:qqp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - text-classification datasets: - qqp metrics: - F1 --- ## bert-base-uncased model fine-tuned on QQP This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **36%** of the original weights. The model contains **50%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method). <div class="graph"><script src="/echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid/raw/main/model_card/density_info.js" id="70162e64-2a82-4147-ac7a-864cfe18a013"></script></div> ## Fine-Pruning details This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on the QQP task, and distilled from the model [textattack/bert-base-uncased-QQP](https://huggingface.co/textattack/bert-base-uncased-QQP). This model is case-insensitive: it does not make a difference between english and English. A side-effect of block pruning is that some of the attention heads are completely removed: 54 heads were removed out of a total of 144 (37.5%). <div class="graph"><script src="/echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid/raw/main/model_card/pruning_info.js" id="f4fb8229-3e66-406e-b99f-f771ce6117c8"></script></div> ## Details of the QQP dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | QQP | train | 364K | | QQP | eval | 40K | ### Results **Pytorch model file size**: `377MB` (original BERT: `420MB`) | Metric | # Value | | ------ | --------- | | **F1** | **87.87** |
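For inference, the pruned checkpoint should load as a standard sequence-classification model; the sketch below is a minimal example under that assumption. The question pair is invented, and the class order (duplicate vs. not duplicate) should be confirmed from `model.config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP is a sentence-pair task: score whether two questions are duplicates.
inputs = tokenizer("How do I learn Python?",
                   "What is the best way to learn Python?",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # check model.config.id2label for which index means "duplicate"
```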
Vivek/gpt2-common-sense-reasoning
Vivek
2021-07-15T09:29:46Z
11
1
transformers
[ "transformers", "jax", "tensorboard", "gpt2", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is intended to test the common-sense reasoning of GPT-2: it assesses how well the model adapts to datasets that require not only large models but also a degree of common sense.
huggingtweets/jennyenicholson
huggingtweets
2021-07-14T21:37:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jennyenicholson/1626298632174/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1406127247283548162/tOo7-e6j_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jenny Nicholson</div> <div style="text-align: center; font-size: 14px;">@jennyenicholson</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jenny Nicholson. | Data | Jenny Nicholson | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 126 | | Short tweets | 252 | | Tweets kept | 2869 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kptwa31/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jennyenicholson's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37kyl0hh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37kyl0hh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jennyenicholson') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
YusufSahin99/IFIS_ZORK_AI_MODERN
YusufSahin99
2021-07-14T15:12:29Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model_index: - name: IFIS_ZORK_AI_MODERN results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IFIS_ZORK_AI_MODERN This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
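A minimal generation sketch for this checkpoint, assuming it loads under the `YusufSahin99/IFIS_ZORK_AI_MODERN` model ID; the modern-setting prompt is invented for illustration.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_MODERN")
print(generator("You step out of the taxi onto a rain-soaked street.", max_length=60))
```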
YusufSahin99/IFIS_ZORK_AI_HORROR
YusufSahin99
2021-07-14T14:11:24Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model_index: - name: IFIS_ZORK_AI_HORROR results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IFIS_ZORK_AI_HORROR This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
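A short usage sketch, analogous to the other ZORK-style models; the horror-flavoured prompt is only an example.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_HORROR")
print(generator("The basement door creaks open behind you.", max_length=60))
```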
YusufSahin99/IFIS_ZORK_AI_FANTASY
YusufSahin99
2021-07-14T13:18:10Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model_index: - name: IFIS_ZORK_AI_FANTASY results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IFIS_ZORK_AI_FANTASY This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
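As with its sibling models, a minimal generation sketch follows; the fantasy prompt is illustrative only.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_FANTASY")
print(generator("You stand before the gates of an ancient castle.", max_length=60))
```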
ehdwns1516/klue-roberta-base-kornli
ehdwns1516
2021-07-14T08:11:08Z
19
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# klue-roberta-base-kornli * This model was trained with a Korean dataset. * Input a premise sentence and a hypothesis sentence. * You can use English, but don't expect accuracy. * If the input is longer than 1200 characters, it may be truncated in the middle and the results may suffer. klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/) klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/klue-roberta-base_kornli) ## Overview Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base) Language: Korean Training data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI) Eval data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI) Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/klue-roberta-base_finetunning_ex) ## Usage ## In Transformers ``` from transformers import AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli") classifier = pipeline( "text-classification", model="ehdwns1516/klue-roberta-base-kornli", return_all_scores=True, ) premise = "your premise" hypothesis = "your hypothesis" result = dict() result[0] = classifier(premise + tokenizer.sep_token + hypothesis)[0] ```
cstorm125/wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa
cstorm125
2021-07-14T07:41:41Z
4
0
transformers
[ "transformers", "pytorch", "camembert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- widget: - text: "สวนกุหลาบเป็นโรงเรียนอะไร" context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย" --- # wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa Finetuning `airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --model_max_length 400 \ --pad_on_right \ --fp16 \ --use_auth_token ```
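For inference, a minimal sketch with the question-answering pipeline could look like the following; the question is taken from the widget above, and the context string is a placeholder to replace with your own Thai passage.

```python
from transformers import pipeline

model_id = "cstorm125/wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa"
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

result = qa(question="สวนกุหลาบเป็นโรงเรียนอะไร", context="your Thai context here")
print(result["answer"], result["score"])
```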
cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa
cstorm125
2021-07-14T07:24:50Z
5
0
transformers
[ "transformers", "pytorch", "camembert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- widget: - text: "สวนกุหลาบเป็นโรงเรียนอะไร" context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย" --- # airesearch/wangchanberta-base-att-spm-uncased Finetuning `airesearch/wangchanberta-base-att-spm-uncased` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=airesearch/wangchanberta-base-att-spm-uncased python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --lowercase \ --pad_on_right \ --fp16 ```
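As with the sibling model above, a minimal question-answering sketch follows; the question comes from the widget and the context is a placeholder for your own passage.

```python
from transformers import pipeline

model_id = "cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa"
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

print(qa(question="สวนกุหลาบเป็นโรงเรียนอะไร", context="your Thai context here"))
```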
keshan/sinhala-roberta-oscar
keshan
2021-07-14T06:28:47Z
7
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "oscar", "Sinhala", "si", "dataset:oscar", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: si tags: - oscar - Sinhala - roberta - fill-mask widget: - text: "මම සිංහල භාෂාව <mask>" datasets: - oscar --- ### Overview This is a slightly smaller model trained on the [OSCAR](https://oscar-corpus.com/) Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it. So, this would be a good starting point for training models for further downstream tasks. ## Model Specification The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications: 1. vocab_size=50265 2. max_position_embeddings=514 3. num_attention_heads=12 4. num_hidden_layers=12 5. type_vocab_size=1 ## How to Use You can use this model directly with a pipeline for masked language modeling: ```py from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline model = AutoModelWithLMHead.from_pretrained("keshan/sinhala-roberta-oscar") tokenizer = AutoTokenizer.from_pretrained("keshan/sinhala-roberta-oscar") fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) fill_mask("මම ගෙදර <mask>.") ```
andi611/roberta-base-ner-conll2003
andi611
2021-07-14T00:25:37Z
4
1
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 model_index: - name: roberta-base-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0814 - eval_precision: 0.9101 - eval_recall: 0.9336 - eval_f1: 0.9217 - eval_accuracy: 0.9799 - eval_runtime: 10.2964 - eval_samples_per_second: 315.646 - eval_steps_per_second: 39.529 - epoch: 1.14 - step: 500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
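A minimal NER sketch for this checkpoint; the example sentence is invented, and `aggregation_strategy` assumes a reasonably recent `transformers` release (the card lists 4.8.2, which should support it).

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="andi611/roberta-base-ner-conll2003",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```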
huggingtweets/jplatzhalter
huggingtweets
2021-07-13T22:13:16Z
9
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jplatzhalter/1626214256716/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1204103314733821954/O_QCiMdI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Julia Platz-Halter</div> <div style="text-align: center; font-size: 14px;">@jplatzhalter</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Julia Platz-Halter. | Data | Julia Platz-Halter | | --- | --- | | Tweets downloaded | 3235 | | Retweets | 270 | | Short tweets | 373 | | Tweets kept | 2592 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2z39jb5g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jplatzhalter's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1deih6g9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1deih6g9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jplatzhalter') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
YusufSahin99/IFIS_ZORK_AI_SCIFI
YusufSahin99
2021-07-13T15:34:34Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model_index: - name: IFIS_ZORK_AI_SCIFI results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IFIS_ZORK_AI_SCIFI This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
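A minimal generation sketch mirroring the other IFIS ZORK models; the sci-fi prompt is purely illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_SCIFI")
print(generator("The airlock hisses open onto the red plain.", max_length=60))
```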
YusufSahin99/Zork_AI_SciFi
YusufSahin99
2021-07-13T14:58:01Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model_index: - name: Zork_AI_SciFi results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Zork_AI_SciFi This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
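Another minimal usage sketch; the prompt is an invented sci-fi opening line.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/Zork_AI_SciFi")
print(generator("You wake up alone on the derelict station.", max_length=60))
```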
AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2
AIDA-UPM
2021-07-13T14:12:45Z
292
12
transformers
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "multilingual", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity language: "multilingual" tags: - feature-extraction - sentence-similarity - transformers - multilingual --- # mstsb-paraphrase-multilingual-mpnet-base-v2 This is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` from [sentence-transformers](https://www.SBERT.net) model with [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences. <!--- Describe your model here --> This model is fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. For mantaining data quality the sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at [GitHub](https://github.com/Huertas97/Multilingual-STSB). The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below). <!-- ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer # It support several languages sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"] # The pooling technique is automatically detected (mean pooling) model = SentenceTransformer('mstsb-paraphrase-multilingual-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` --> ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # We should define the proper pooling function: Mean pooling # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2') model = AutoModel.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> Check the test results in the Semantic Textual Similarity Tasks. The 15 languages available at the [Multilingual STSB](https://github.com/Huertas97/Multilingual-STSB) have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks have two sentences, each in a different language being one of them English (e.g., en-ar, en-es). Here we compare the average multilingual semantic textual similairty capabilities between the `paraphrase-multilingual-mpnet-base-v2` based model and the `mstsb-paraphrase-multilingual-mpnet-base-v2` fine-tuned model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient. | Model | Average Spearman Cosine Test | |:---------------------------------------------:|:------------------------------:| | mstsb-paraphrase-multilingual-mpnet-base-v2 | 0.835890 | | paraphrase-multilingual-mpnet-base-v2 | 0.818896 | <br> The following tables breakdown the performance of `mstsb-paraphrase-multilingual-mpnet-base-v2` according to the different tasks. For the sake of readability tasks have been splitted into monolingual and cross-lingual tasks. | Monolingual Task | Pearson Cosine test | Spearman Cosine test | |:------------------:|:---------------------:|:-----------------------:| | en;en | 0.868048310692506 | 0.8740170943535747 | | ar;ar | 0.8267139454193487 | 0.8284459741532022 | | cs;cs | 0.8466821720942157 | 0.8485417688803879 | | de;de | 0.8517285961812183 | 0.8557680051557893 | | es;es | 0.8519185309064691 | 0.8552243211580456 | | fr;fr | 0.8430951067985064 | 0.8466614534379704 | | hi;hi | 0.8178258630578092 | 0.8176462079184331 | | it;it | 0.8475909574305637 | 0.8494216064459076 | | ja;ja | 0.8435588859386477 | 0.8456031494178619 | | nl;nl | 0.8486765104527032 | 0.8520856765262531 | | pl;pl | 0.8407840177883407 | 0.8443070467300299 | | pt;pt | 0.8534880178249296 | 0.8578544068829622 | | ru;ru | 0.8390897585455678 | 0.8423041443534423 | | tr;tr | 0.8382125451820572 | 0.8421587450058385 | | zh-CN;zh-CN | 0.826233678946644 | 0.8248515460782744 | | zh-TW;zh-TW | 0.8242683809675422 | 0.8235506799952028 | <br> | Cross-lingual Task | Pearson Cosine test | Spearman Cosine test | |:--------------------:|:---------------------:|:-----------------------:| | en;ar | 0.7990830340462535 | 0.7956792016468148 | | en;cs | 0.8381274879061265 | 0.8388713450024455 | | en;de | 0.8414439600928739 | 0.8441971698649943 | | en;es | 0.8442337511356952 | 0.8445035292903559 | | en;fr | 0.8378437644605063 | 0.8387903367907733 | | en;hi | 0.7951955086055527 | 0.7905052217683244 | | en;it | 0.8415686372978766 | 0.8419480899107785 | | en;ja | 0.8094306665283388 | 0.8032512280936449 | | en;nl | 0.8389526140129767 | 0.8409310421803277 | | en;pl | 0.8261309163979578 | 0.825976253023656 | | en;pt | 0.8475546209070765 | 0.8506606391790897 | | en;ru | 0.8248514914263723 | 0.8224871183202255 | | en;tr | 0.8191803661207868 | 0.8194200775744044 | | en;zh-CN | 0.8147678083378249 | 
0.8102089470690433 | | en;zh-TW | 0.8107272160374955 | 0.8056129680510944 | ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 687 with parameters: ``` {'batch_size': 132, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 2, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 140, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
lewtun/dummy-translation
lewtun
2021-07-13T12:43:13Z
3
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: dummy-translation results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-translation This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
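Since the base checkpoint is Helsinki-NLP/opus-mt-en-ro, this fine-tune can be exercised as an English-to-Romanian translator. The sketch below is illustrative only; as a dummy model, translation quality is not guaranteed.

```python
from transformers import pipeline

translator = pipeline("translation_en_to_ro", model="lewtun/dummy-translation")
print(translator("The quick brown fox jumps over the lazy dog.", max_length=64))
```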
huggingtweets/onlinepete-recyrb
huggingtweets
2021-07-13T03:03:22Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/onlinepete-recyrb/1626145398544/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1305948384524730369/FoDgA8gh_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456958582731603969/QZKpv6eI_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">bryce & im pete online</div> <div style="text-align: center; font-size: 14px;">@onlinepete-recyrb</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from bryce & im pete online. | Data | bryce | im pete online | | --- | --- | --- | | Tweets downloaded | 3176 | 3190 | | Retweets | 183 | 94 | | Short tweets | 406 | 1003 | | Tweets kept | 2587 | 2093 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1y0uormq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @onlinepete-recyrb's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16lwsen4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16lwsen4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/onlinepete-recyrb') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jaimin/Gujarati-Model
jaimin
2021-07-12T13:23:21Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("jaimin/Gujarati-Model") model = AutoModel.from_pretrained("jaimin/Gujarati-Model") ```
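As a usage sketch (not from the original card), one way to turn this feature-extraction model's outputs into sentence-level features is to mean-pool the last hidden states; the Gujarati example sentence is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jaimin/Gujarati-Model")
model = AutoModel.from_pretrained("jaimin/Gujarati-Model")

# Encode a sentence and mean-pool the last hidden states into a single vector
inputs = tokenizer("ગુજરાતી ભાષા", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)
```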
Littlejohn/analisis_sentimientos
Littlejohn
2021-07-12T00:22:27Z
11
0
transformers
[ "transformers", "text-classification", "en", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - en pipeline_tag: text-classification --- # bert-base-cased-sentiment This is a BERT model (bert-base-cased) fine-tuned for two-class sentiment analysis. The sentiment is labeled only as positive or negative, depending on the supplied sentence. ## Training data The dataset used to train the model is a collection of Amazon reviews, which can be downloaded from the original author on Kaggle, [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer/amazonreviews) (Amazon Reviews for Sentiment Analysis). The dataset comprised only 40,000 sentences, and only the first 100 words were kept to form each sentence. ## Accuracy The fine-tuned model was evaluated on 3 test sets to measure its accuracy. - The first test used a dataset of hotel reviews | Accuracy | | -------- | | 95% | - The second test used a dataset of food reviews | Accuracy | | -------- | | 88% | - The third test used a dataset of general sentiment | Accuracy | | -------- | | 65% | ## Contact Contact via GitHub: [Murdoocc7](https://github.com/murdoocc)
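A hypothetical usage sketch, assuming the checkpoint loads with the standard text-classification pipeline (the card does not document this); the example review is invented.

```python
from transformers import pipeline

# Assumption: the repository exposes weights compatible with this pipeline
classifier = pipeline("text-classification", model="Littlejohn/analisis_sentimientos")
print(classifier("The hotel was clean and the staff were very helpful."))
```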
keshan/sinhala-gpt2
keshan
2021-07-11T17:53:31Z
14
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "gpt2", "feature-extraction", "Sinhala", "text-generation", "si", "dataset:mc4", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: si tags: - Sinhala - text-generation - gpt2 datasets: - mc4 --- ## Overview This is a smaller GPT2 model trained on the Sinhala portion of the [MC4](https://github.com/allenai/allennlp/discussions/5056) dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this is a useful starting point for further training on downstream tasks. ## Model Specification The model chosen for training is GPT2 with the following specifications: 1. vocab_size=50257 2. n_embd=768 3. n_head=12 4. n_layer=12 5. n_positions=1024 ## How to Use You can use this model directly with a pipeline for causal language modeling: ```py from transformers import pipeline generator = pipeline('text-generation', model='keshan/sinhala-gpt2') generator("මම", max_length=50, num_return_sequences=5) ```
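For reference, the specification above can be expressed as a `transformers` configuration; this is an illustrative reconstruction, not the original training script.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# The listed specification mapped onto a GPT2 config
config = GPT2Config(
    vocab_size=50257,
    n_embd=768,
    n_head=12,
    n_layer=12,
    n_positions=1024,
)
model = GPT2LMHeadModel(config)
print(f"{model.num_parameters():,} parameters")
```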
ysharma/new-model-dummy
ysharma
2021-07-11T11:51:02Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# Dummy model This is just a dummy model, created by copying the bert-base-uncased model files here.
nateraw/donut-or-bagel
nateraw
2021-07-10T19:54:49Z
71
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: donut-or-bagel results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9375 --- # donut-or-bagel Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bagel ![bagel](images/bagel.jpg) #### donut ![donut](images/donut.jpg)
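A minimal usage sketch, assuming a `transformers` version that ships the image-classification pipeline; `images/bagel.jpg` refers to the example image above and can be any local path or URL.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/donut-or-bagel")
# Returns a list of {label, score} dictionaries for the bagel/donut classes
print(classifier("images/bagel.jpg"))
```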
rafio/argentina
rafio
2021-07-10T17:29:18Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://twitter.com/i/events/1413870919320104965 https://peatix.com/group/11420372/ https://cmdt-guyane.fr/advert/argentina-vs-brazil-live-stream-final-2021/ https://www.quisqueyapeach.com/advert/argentina-vs-brazil-live-stream-final-2021/ https://www.beauvaissubaquatique.fr/advert/argentina-vs-brazil-live-stream-final-2021/ https://www.dmcityview.com/event/argentina-vs-brazil-live-stream-final-2021/
huggingtweets/averagesmasher
huggingtweets
2021-07-10T13:47:30Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/averagesmasher/1625924846625/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1368753714568327168/oh6BSjqX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">AverageVermontSmasher</div> <div style="text-align: center; font-size: 14px;">@averagesmasher</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from AverageVermontSmasher. | Data | AverageVermontSmasher | | --- | --- | | Tweets downloaded | 41 | | Retweets | 0 | | Short tweets | 2 | | Tweets kept | 39 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/auyr340s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @averagesmasher's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qnfjchi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qnfjchi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/averagesmasher') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
shahukareem/dhivehi-roberta-base
shahukareem
2021-07-10T00:19:12Z
4
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "dv", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: dv tags: - dv - roberta widget: - text: "<mask> މާލެ އަކީ ދިވެހިރާއްޖޭގެ" --- # Dhivehi Roberta Base - Oscar ## Description RoBERTa pretrained from scratch with the Jax/Flax backend, using only the Dhivehi portion of the OSCAR corpus.
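A minimal usage sketch based on the widget example above; the pipeline fills the `<mask>` token with its top predictions.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="shahukareem/dhivehi-roberta-base")
for prediction in fill_mask("<mask> މާލެ އަކީ ދިވެހިރާއްޖޭގެ"):
    print(prediction["token_str"], prediction["score"])
```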
huggingtweets/freudotheism
huggingtweets
2021-07-09T21:54:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/freudotheism/1625867628365/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1412918415703019521/J2TQHTDo_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Evelyn🪶🇰🇵</div> <div style="text-align: center; font-size: 14px;">@freudotheism</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Evelyn🪶🇰🇵. | Data | Evelyn🪶🇰🇵 | | --- | --- | | Tweets downloaded | 3231 | | Retweets | 333 | | Short tweets | 968 | | Tweets kept | 1930 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rbzyyts/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freudotheism's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/elt06ed5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/elt06ed5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/freudotheism') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flax-community/t5-covid-qa
flax-community
2021-07-09T19:03:44Z
1
0
null
[ "arxiv:2002.08910", "region:us" ]
null
2022-03-02T23:29:05Z
# Covid19 Related Question Answering (Closed book question answering) In 2020, COVID-19, which is caused by a coronavirus called SARS-CoV-2, took over the world. It touched the lives of many people and caused a lot of hardship for humanity. There are still many questions regarding COVID-19, and it is often difficult to get the right answers. The aim of this project is to finetune models for closed book question answering. In closed-book QA, we feed the model a question *without any context or access to external knowledge* and train it to predict the answer. Since the model doesn't receive any context, the primary way it can learn to answer these questions is based on the "knowledge" it obtained during pre-training [[1]](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#scrollTo=zSeyoqE7WMwu) [[2]](https://arxiv.org/abs/2002.08910). The main goals of this project are: 1. Train a model for question answering regarding COVID-19 2. Release the top performing models for further research and enhancement 3. Release all of the preprocessing and postprocessing scripts and findings for future research. ## TO DO LIST: - [x] Team members met and the following was discussed: - A data preparation script that mixes CORD-19 and PubMed is ready. - Agreed to finalize the training scripts by 9pm PDT 7/9/2021. - Tokenizer is now trained. - [ ] Set up the pretraining script - [ ] Prepare the finetuning tasks inspired by the [T5 Trivia Colab](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb) - Which datasets do we want to go with? - [Covid-QA](https://huggingface.co/datasets/covid_qa_deepset) (Maybe as test set?) - [Trivia](https://huggingface.co/datasets/covid_qa_deepset) - [CDC-QA](https://www.cdc.gov/coronavirus/2019-ncov/faq.html) (We can scrape quickly using beautiful soup or something) - [More Medical Datasets](https://aclanthology.org/2020.findings-emnlp.289.pdf) (See the dataset section for inspiration) ## 1. Model We will be using the T5 model. ## 2. Datasets The following datasets will be used for finetuning the model. Note that the last dataset is optional and the model is evaluated only using Covid-QA. For **Intermediate Pre-Training**: 1. [CORD-19](https://allenai.org/data/cord-19) For **Fine-Tuning**: 1. [Covid-QA](https://huggingface.co/datasets/covid_qa_deepset) 2. [CDC-QA](https://www.cdc.gov/coronavirus/2019-ncov/faq.html) 3. Optional - [Trivia-QA](https://nlp.cs.washington.edu/triviaqa/) ## 3. Training Scripts We can make use of: 1. [For preprocessing and mixing datasets](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#:~:text=In%20this%20notebook%2C%20we&#39;ll,it%20to%20predict%20the%20answer.) 2. [For T5 training](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py) ## 4. Additional Reading - [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/pdf/2002.08910.pdf)
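To make the closed-book setup concrete, here is an illustrative inference sketch; `t5-base` stands in for the fine-tuned checkpoint this project aims to produce, and the `question:` prompt format is an assumption.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Closed-book QA: the question is given with no supporting context
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

inputs = tokenizer("question: What virus causes COVID-19?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```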