| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-13 00:37:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (555 classes) | — | — |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-13 00:35:18 |
| card | string (length) | 11 | 1.01M |
M-CLIP/Swedish-2M
M-CLIP
2022-09-15T10:46:07Z
154
1
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "sv", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
--- language: sv --- <br /> <p align="center"> <h1 align="center">Swe-CLIP 2M</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%202M">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('Swe-CLIP-2M') embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta']) print(embeddings.shape) # Yields: torch.Size([2, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) model tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br> Training data pairs were generated by sampling 2 million sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into Swedish. All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
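The snippet above only covers the text side. Below is a minimal sketch of loading the matching Res50x4 CLIP vision encoder named in the About section, using OpenAI's `clip` package; the weights id `RN50x4` and the image path are assumptions, and the linear projection weights from the Multilingual-CLIP repository are still needed to align the two spaces:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Vision encoder paired with Swe-CLIP per the About section (assumed weights id).
clip_model, preprocess = clip.load("RN50x4", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = clip_model.encode_image(image)
print(image_features.shape)  # expected: torch.Size([1, 640]), matching the text embeddings
```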
M-CLIP/XLM-Roberta-Large-Vit-B-16Plus
M-CLIP
2022-09-15T10:45:56Z
57,956
27
transformers
[ "transformers", "pytorch", "tf", "multilingual", "af", "sq", "am", "ar", "az", "bn", "bs", "bg", "ca", "zh", "hr", "cs", "da", "nl", "en", "et", "fr", "de", "el", "hi", "hu", "is", "id", "it", "ja", "mk", "ml", "mr", "pl", "pt", "ro", "ru", "sr", "sl", "es", "sw", "sv", "tl", "te", "tr", "tk", "uk", "ur", "ug", "uz", "vi", "xh", "endpoints_compatible", "region:us" ]
null
2022-05-30T21:33:14Z
--- language: - multilingual - af - sq - am - ar - az - bn - bs - bg - ca - zh - hr - cs - da - nl - en - et - fr - de - el - hi - hu - is - id - it - ja - mk - ml - mr - pl - pt - ro - ru - sr - sl - es - sw - sv - tl - te - tr - tk - uk - ur - ug - uz - vi - xh --- ## Multilingual-clip: XLM-Roberta-Large-Vit-B-16Plus Multilingual-CLIP extends OpenAI's English text encoders to multiple other languages. This model *only* contains the multilingual text encoder. The corresponding image model `Vit-B-16Plus` can be retrieved via instructions found on the `mlfoundations` [open_clip repository on Github](https://github.com/mlfoundations/open_clip). We provide a usage example below. ## Requirements To use both the multilingual text encoder and corresponding image encoder, we need to install the packages [`multilingual-clip`](https://github.com/FreddeFrallan/Multilingual-CLIP) and [`open_clip_torch`](https://github.com/mlfoundations/open_clip). ``` pip install multilingual-clip pip install open_clip_torch ``` ## Usage Extracting embeddings from the text encoder can be done in the following way: ```python from multilingual_clip import pt_multilingual_clip import transformers texts = [ 'Three blind horses listening to Mozart.', 'Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?' ] model_name = 'M-CLIP/XLM-Roberta-Large-Vit-B-16Plus' # Load Model & Tokenizer model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) embeddings = model.forward(texts, tokenizer) print("Text features shape:", embeddings.shape) ``` Extracting embeddings from the corresponding image encoder: ```python import torch import open_clip import requests from PIL import Image device = "cuda" if torch.cuda.is_available() else "cpu" model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-16-plus-240', pretrained="laion400m_e32") model.to(device) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image = preprocess(image).unsqueeze(0).to(device) with torch.no_grad(): image_features = model.encode_image(image) print("Image features shape:", image_features.shape) ``` ## Evaluation results None of the M-CLIP models have been extensively evaluated, but testing them on Txt2Img retrieval using the human-translated MS-COCO dataset, we see the following **R@10** results: | Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp | | ----------------------------------|:-----: |:-----: |:-----: |:-----: | :-----: |:-----: |:-----: |:-----: |:-----: |:-----: |:-----: | | [OpenAI CLIP Vit-B/32](https://github.com/openai/CLIP)| 90.3 | - | - | - | - | - | - | - | - | - | - | | [OpenAI CLIP Vit-L/14](https://github.com/openai/CLIP)| 91.8 | - | - | - | - | - | - | - | - | - | - | | [OpenCLIP ViT-B-16+](https://github.com/mlfoundations/open_clip)| 94.3 | - | - | - | - | - | - | - | - | - | - | | [LABSE Vit-L/14](https://huggingface.co/M-CLIP/LABSE-Vit-L-14)| 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 | | [XLM-R Large Vit-B/32](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-32)| 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8| 91.4 | 82.1 | 86.1 | 88.8 | 81.0 | | [XLM-R Vit-L/14](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14)| 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 | | [XLM-R Large Vit-B/16+](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-16Plus)| **95.0** | **93.0** 
| **93.6** | **93.1** | **94.0** | **93.1** | **94.4** | **89.0** | **90.0** | **93.0** | **84.2** | ## Training/Model details Further details about the model training and data can be found in the [model card](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/larger_mclip.md).
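As a complement to the snippets above, here is a minimal sketch of scoring texts against an image with cosine similarity; it assumes the `embeddings` and `image_features` tensors produced by the two usage examples, which live in the same 640-dimensional space:

```python
import torch.nn.functional as F

# L2-normalize both sides, then take dot products as cosine similarities.
text_emb = F.normalize(embeddings, dim=-1)
img_emb = F.normalize(image_features.float(), dim=-1)
similarity = img_emb @ text_emb.T  # shape: (num_images, num_texts)
print("Image-to-text similarity:", similarity)
```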
M-CLIP/XLM-Roberta-Large-Vit-B-32
M-CLIP
2022-09-15T10:45:49Z
13,427
15
transformers
[ "transformers", "pytorch", "tf", "M-CLIP", "multilingual", "af", "sq", "am", "ar", "az", "bn", "bs", "bg", "ca", "zh", "hr", "cs", "da", "nl", "en", "et", "fr", "de", "el", "hi", "hu", "is", "id", "it", "ja", "mk", "ml", "mr", "pl", "pt", "ro", "ru", "sr", "sl", "es", "sw", "sv", "tl", "te", "tr", "tk", "uk", "ur", "ug", "uz", "vi", "xh", "endpoints_compatible", "region:us" ]
null
2022-05-31T09:50:54Z
--- language: - multilingual - af - sq - am - ar - az - bn - bs - bg - ca - zh - hr - cs - da - nl - en - et - fr - de - el - hi - hu - is - id - it - ja - mk - ml - mr - pl - pt - ro - ru - sr - sl - es - sw - sv - tl - te - tr - tk - uk - ur - ug - uz - vi - xh --- ## Multilingual-clip: XLM-Roberta-Large-Vit-B-32 Multilingual-CLIP extends OpenAI's English text encoders to multiple other languages. This model *only* contains the multilingual text encoder. The corresponding image model `ViT-B-32` can be retrieved via instructions found on OpenAI's [CLIP repository on Github](https://github.com/openai/CLIP). We provide a usage example below. ## Requirements To use both the multilingual text encoder and corresponding image encoder, we need to install the packages [`multilingual-clip`](https://github.com/FreddeFrallan/Multilingual-CLIP) and [`clip`](https://github.com/openai/CLIP). ``` pip install multilingual-clip pip install git+https://github.com/openai/CLIP.git ``` ## Usage Extracting embeddings from the text encoder can be done in the following way: ```python from multilingual_clip import pt_multilingual_clip import transformers texts = [ 'Three blind horses listening to Mozart.', 'Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?' ] model_name = 'M-CLIP/XLM-Roberta-Large-Vit-B-32' # Load Model & Tokenizer model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) embeddings = model.forward(texts, tokenizer) print("Text features shape:", embeddings.shape) ``` Extracting embeddings from the corresponding image encoder: ```python import torch import clip import requests from PIL import Image device = "cuda" if torch.cuda.is_available() else "cpu" model, preprocess = clip.load("ViT-B/32", device=device) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image = preprocess(image).unsqueeze(0).to(device) with torch.no_grad(): image_features = model.encode_image(image) print("Image features shape:", image_features.shape) ``` ## Evaluation results None of the M-CLIP models have been extensively evaluated, but testing them on Txt2Img retrieval using the human-translated MS-COCO dataset, we see the following **R@10** results: | Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp | | ----------------------------------|:-----: |:-----: |:-----: |:-----: | :-----: |:-----: |:-----: |:-----: |:-----: |:-----: |:-----: | | [OpenAI CLIP Vit-B/32](https://github.com/openai/CLIP)| 90.3 | - | - | - | - | - | - | - | - | - | - | | [OpenAI CLIP Vit-L/14](https://github.com/openai/CLIP)| 91.8 | - | - | - | - | - | - | - | - | - | - | | [OpenCLIP ViT-B-16+](https://github.com/mlfoundations/open_clip)| 94.3 | - | - | - | - | - | - | - | - | - | - | | [LABSE Vit-L/14](https://huggingface.co/M-CLIP/LABSE-Vit-L-14)| 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 | | [XLM-R Large Vit-B/32](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-32)| 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8| 91.4 | 82.1 | 86.1 | 88.8 | 81.0 | | [XLM-R Vit-L/14](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14)| 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 | | [XLM-R Large Vit-B/16+](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-16Plus)| **95.0** | **93.0** | **93.6** | **93.1** | **94.0** | **93.1** | **94.4** | **89.0** | **90.0** | **93.0** | **84.2** |
## Training/Model details Further details about the model training and data can be found in the [model card](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/larger_mclip.md).
M-CLIP/XLM-Roberta-Large-Vit-L-14
M-CLIP
2022-09-15T10:44:59Z
27,133
14
transformers
[ "transformers", "pytorch", "tf", "multilingual", "af", "sq", "am", "ar", "az", "bn", "bs", "bg", "ca", "zh", "hr", "cs", "da", "nl", "en", "et", "fr", "de", "el", "hi", "hu", "is", "id", "it", "ja", "mk", "ml", "mr", "pl", "pt", "ro", "ru", "sr", "sl", "es", "sw", "sv", "tl", "te", "tr", "tk", "uk", "ur", "ug", "uz", "vi", "xh", "endpoints_compatible", "region:us" ]
null
2022-05-30T14:35:41Z
--- language: - multilingual - af - sq - am - ar - az - bn - bs - bg - ca - zh - hr - cs - da - nl - en - et - fr - de - el - hi - hu - is - id - it - ja - mk - ml - mr - pl - pt - ro - ru - sr - sl - es - sw - sv - tl - te - tr - tk - uk - ur - ug - uz - vi - xh --- ## Multilingual-clip: XLM-Roberta-Large-Vit-L-14 Multilingual-CLIP extends OpenAI's English text encoders to multiple other languages. This model *only* contains the multilingual text encoder. The corresponding image model `ViT-L-14` can be retrieved via instructions found on OpenAI's [CLIP repository on Github](https://github.com/openai/CLIP). We provide a usage example below. ## Requirements To use both the multilingual text encoder and corresponding image encoder, we need to install the packages [`multilingual-clip`](https://github.com/FreddeFrallan/Multilingual-CLIP) and [`clip`](https://github.com/openai/CLIP). ``` pip install multilingual-clip pip install git+https://github.com/openai/CLIP.git ``` ## Usage Extracting embeddings from the text encoder can be done in the following way: ```python from multilingual_clip import pt_multilingual_clip import transformers texts = [ 'Three blind horses listening to Mozart.', 'Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?' ] model_name = 'M-CLIP/XLM-Roberta-Large-Vit-L-14' # Load Model & Tokenizer model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) embeddings = model.forward(texts, tokenizer) print("Text features shape:", embeddings.shape) ``` Extracting embeddings from the corresponding image encoder: ```python import torch import clip import requests from PIL import Image device = "cuda" if torch.cuda.is_available() else "cpu" model, preprocess = clip.load("ViT-L/14", device=device) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image = preprocess(image).unsqueeze(0).to(device) with torch.no_grad(): image_features = model.encode_image(image) print("Image features shape:", image_features.shape) ``` ## Evaluation results None of the M-CLIP models have been extensively evaluated, but testing them on Txt2Img retrieval using the human-translated MS-COCO dataset, we see the following **R@10** results: | Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp | | ----------------------------------|:-----: |:-----: |:-----: |:-----: | :-----: |:-----: |:-----: |:-----: |:-----: |:-----: |:-----: | | [OpenAI CLIP Vit-B/32](https://github.com/openai/CLIP)| 90.3 | - | - | - | - | - | - | - | - | - | - | | [OpenAI CLIP Vit-L/14](https://github.com/openai/CLIP)| 91.8 | - | - | - | - | - | - | - | - | - | - | | [OpenCLIP ViT-B-16+](https://github.com/mlfoundations/open_clip)| 94.3 | - | - | - | - | - | - | - | - | - | - | | [LABSE Vit-L/14](https://huggingface.co/M-CLIP/LABSE-Vit-L-14)| 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 | | [XLM-R Large Vit-B/32](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-32)| 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8| 91.4 | 82.1 | 86.1 | 88.8 | 81.0 | | [XLM-R Vit-L/14](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14)| 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 | | [XLM-R Large Vit-B/16+](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-16Plus)| **95.0** | **93.0** | **93.6** | **93.1** | **94.0** | **93.1** | **94.4** | **89.0** | **90.0** | **93.0** | **84.2** |
## Training/Model details Further details about the model training and data can be found in the [model card](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/larger_mclip.md).
mpapucci/it5-age-classification-tag-it
mpapucci
2022-09-15T10:31:04Z
111
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "T5", "Text Classification", "it", "dataset:TAG-IT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-01T20:24:09Z
--- language: - it tags: - T5 - Text Classification datasets: - TAG-IT --- Write an Italian sentence with the prefix "Classifica Età: " to get an age classification of the sentence. The dataset used for the task is [TAG-IT](https://sites.google.com/view/tag-it-2020/). The model is a fine-tuned version of [IT5-base](https://huggingface.co/gsarti/it5-base) by Sarti and Nissim.
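A minimal inference sketch for the prefix-based usage described above; the example sentence is illustrative, and the exact label strings produced depend on the TAG-IT fine-tuning:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mpapucci/it5-age-classification-tag-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prepend the task prefix from the card; the sentence itself is illustrative.
inputs = tokenizer("Classifica Età: Domani vado al mare con i miei amici.", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```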
sd-concepts-library/cow-uwu
sd-concepts-library
2022-09-15T09:36:06Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-15T09:36:02Z
--- license: mit --- ### cow uwu on Stable Diffusion This is the `<cow-uwu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<cow-uwu> 0](https://huggingface.co/sd-concepts-library/cow-uwu/resolve/main/concept_images/0.jpeg) ![<cow-uwu> 1](https://huggingface.co/sd-concepts-library/cow-uwu/resolve/main/concept_images/1.jpeg) ![<cow-uwu> 2](https://huggingface.co/sd-concepts-library/cow-uwu/resolve/main/concept_images/2.jpeg)
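Outside the notebooks, the concept can also be loaded with `diffusers`, sketched below under the assumption of a version that supports `load_textual_inversion`; the base checkpoint and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cow-uwu")  # registers the <cow-uwu> token

image = pipe("a landscape painting in the style of <cow-uwu>").images[0]
image.save("cow-uwu-sample.png")
```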
oeg/esT5s-small
oeg
2022-09-15T09:31:00Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2022-09-15T08:50:26Z
--- license: cc-by-nc-4.0 --- This is the small version (274MB) of the summarization model for the Spanish language presented at the SEMANTiCS 2022 conference (paper entitled "esT5s: A Spanish Model for Text Summarization"). This model was created in less than 1 hour (using a single GPU, specifically an NVIDIA V100 16GB) from the multilingual T5 model using the XL-Sum dataset. It achieves a ROUGE-1 value of 22.21 (mT5 achieves 26.21 after 96 hours of training on 4 GPUs), ROUGE-2 5.28 (mT5 achieves 8.74), and ROUGE-L 17.44 (mT5 achieves 21.06).
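A usage sketch, under the assumption that the repository hosts a standard `transformers`-compatible seq2seq checkpoint; the input text is illustrative:

```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard seq2seq summarization model.
summarizer = pipeline("summarization", model="oeg/esT5s-small")
texto = "El modelo de resumen automático para el español se presentó en la conferencia SEMANTiCS 2022."
print(summarizer(texto, max_length=32)[0]["summary_text"])
```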
sd-concepts-library/renalla
sd-concepts-library
2022-09-15T09:23:43Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-15T09:23:40Z
--- license: mit --- ### Renalla on Stable Diffusion This is the `Renalla` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![Renalla 0](https://huggingface.co/sd-concepts-library/renalla/resolve/main/concept_images/0.jpeg) ![Renalla 1](https://huggingface.co/sd-concepts-library/renalla/resolve/main/concept_images/3.jpeg) ![Renalla 2](https://huggingface.co/sd-concepts-library/renalla/resolve/main/concept_images/5.jpeg) ![Renalla 3](https://huggingface.co/sd-concepts-library/renalla/resolve/main/concept_images/1.jpeg) ![Renalla 4](https://huggingface.co/sd-concepts-library/renalla/resolve/main/concept_images/2.jpeg) ![Renalla 5](https://huggingface.co/sd-concepts-library/renalla/resolve/main/concept_images/4.jpeg)
yandex/RuLeanALBERT
yandex
2022-09-15T09:16:42Z
36
32
transformers
[ "transformers", "lean_albert", "fill-mask", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-15T06:41:21Z
--- language: ru license: apache-2.0 --- RuLeanALBERT is a pretrained masked language model for the Russian language using a memory-efficient architecture. Read more about the model in [this blog post](https://habr.com/ru/company/yandex/blog/688234/) (in Russian). See its implementation, as well as the pretraining and finetuning code, at [https://github.com/yandex-research/RuLeanALBERT](https://github.com/yandex-research/RuLeanALBERT).
sd-concepts-library/style-of-marc-allante
sd-concepts-library
2022-09-15T07:48:41Z
0
47
null
[ "license:mit", "region:us" ]
null
2022-09-11T01:25:10Z
--- license: mit --- ### Style-of-Marc-Allante on Stable Diffusion This is the `<Marc_Allante>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Marc Allante> 0](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/63.jpeg) ![<Marc Allante> 1](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/43.jpeg) ![<Marc Allante> 2](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/56.jpeg) ![<Marc Allante> 3](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/30.jpeg) ![<Marc Allante> 4](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/24.jpeg) ![<Marc Allante> 5](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/37.jpeg) ![<Marc Allante> 6](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/19.jpeg) ![<Marc Allante> 7](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/54.jpeg) ![<Marc Allante> 8](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/5.jpeg) ![<Marc Allante> 9](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/55.jpeg) ![<Marc Allante> 10](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/66.jpeg) ![<Marc Allante> 11](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/49.jpeg) ![<Marc Allante> 12](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/45.jpeg) ![<Marc Allante> 13](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/58.jpeg) ![<Marc Allante> 14](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/39.jpeg) ![<Marc Allante> 15](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/6.jpeg) ![<Marc Allante> 16](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/38.jpeg) ![<Marc Allante> 17](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/15.jpeg) ![<Marc Allante> 18](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/20.jpeg) ![<Marc Allante> 19](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/35.jpeg) ![<Marc Allante> 20](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/62.jpeg) ![<Marc Allante> 21](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/14.jpeg) ![<Marc Allante> 22](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/9.jpeg) ![<Marc Allante> 
23](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/3.jpeg) ![<Marc Allante> 24](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/47.jpeg) ![<Marc Allante> 25](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/0.jpeg) ![<Marc Allante> 26](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/42.jpeg) ![<Marc Allante> 27](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/36.jpeg) ![<Marc Allante> 28](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/33.jpeg) ![<Marc Allante> 29](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/17.jpeg) ![<Marc Allante> 30](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/12.jpeg) ![<Marc Allante> 31](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/13.jpeg) ![<Marc Allante> 32](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/2.jpeg) ![<Marc Allante> 33](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/44.jpeg) ![<Marc Allante> 34](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/16.jpeg) ![<Marc Allante> 35](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/65.jpeg) ![<Marc Allante> 36](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/52.jpeg) ![<Marc Allante> 37](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/59.jpeg) ![<Marc Allante> 38](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/25.jpeg) ![<Marc Allante> 39](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/50.jpeg) ![<Marc Allante> 40](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/48.jpeg) ![<Marc Allante> 41](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/18.jpeg) ![<Marc Allante> 42](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/22.jpeg) ![<Marc Allante> 43](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/10.jpeg) ![<Marc Allante> 44](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/53.jpeg) ![<Marc Allante> 45](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/46.jpeg) ![<Marc Allante> 46](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/41.jpeg) ![<Marc Allante> 47](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/31.jpeg) ![<Marc Allante> 48](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/7.jpeg) ![<Marc Allante> 49](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/51.jpeg) ![<Marc Allante> 50](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/1.jpeg) ![<Marc Allante> 51](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/27.jpeg) ![<Marc 
Allante> 52](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/32.jpeg) ![<Marc Allante> 53](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/34.jpeg) ![<Marc Allante> 54](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/26.jpeg) ![<Marc Allante> 55](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/21.jpeg) ![<Marc Allante> 56](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/23.jpeg) ![<Marc Allante> 57](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/67.jpeg) ![<Marc Allante> 58](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/61.jpeg) ![<Marc Allante> 59](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/29.jpeg) ![<Marc Allante> 60](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/60.jpeg) ![<Marc Allante> 61](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/40.jpeg) ![<Marc Allante> 62](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/64.jpeg) ![<Marc Allante> 63](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/11.jpeg) ![<Marc Allante> 64](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/57.jpeg) ![<Marc Allante> 65](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/28.jpeg) ![<Marc Allante> 66](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/4.jpeg) ![<Marc Allante> 67](https://huggingface.co/sd-concepts-library/style-of-marc-allante/resolve/main/concept_images/8.jpeg)
racheltong/wav2vec2-large-xlsr-chinese
racheltong
2022-09-15T07:36:59Z
109
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-15T05:46:49Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-chinese results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-chinese This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 3.3216 - Cer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 16.1908 | 7.83 | 400 | 3.3216 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
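A minimal transcription sketch with the `transformers` ASR pipeline; the audio path is illustrative, and given the CER of 1.0 reported above, outputs are unlikely to be usable:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="racheltong/wav2vec2-large-xlsr-chinese")
print(asr("sample.wav")["text"])
```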
lxj616/stable-diffusion-cn-poster
lxj616
2022-09-15T06:41:48Z
0
9
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-09-15T05:31:45Z
--- license: bigscience-bloom-rail-1.0 --- ## Stable Diffusion With Chinese Characteristics A fine-tuned Stable Diffusion model trained on early Chinese posters of farmers and workers. The meme text in the samples was added manually afterwards. This prompt generates posters in the original style: ``` a drawing of people in style of chinese propaganda poster ``` <img src="https://huggingface.co/lxj616/stable-diffusion-cn-poster/resolve/main/sample1.jpg" width=50% height=50%> You may add some creativity to the prompt: ``` a drawing of batman/superman in style of chinese propaganda poster ``` <img src="https://huggingface.co/lxj616/stable-diffusion-cn-poster/resolve/main/sample2.jpg" width=50% height=50%> ## License See LICENSE.TXT from the original stable-diffusion model repo. ## Technical Details Fine-tuned in 3 hours on a single RTX 3090 Ti (24GB VRAM) with 40 images of Chinese posters, based on sd-v1.4: [Finetune stable diffusion under 24gb vram in hours](https://lxj616.github.io/jekyll/update/2022/09/12/finetune-stable-diffusion-under-24gb-vram-in-hours.html)
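A minimal generation sketch with `diffusers`, under the assumption that the released weights are available in (or have been converted to) the diffusers format; the local path is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Path to a local diffusers-format copy of the fine-tuned weights (assumed).
pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-cn-poster", torch_dtype=torch.float16
).to("cuda")

image = pipe("a drawing of people in style of chinese propaganda poster").images[0]
image.save("poster.png")
```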
sd-concepts-library/ttte
sd-concepts-library
2022-09-15T06:33:45Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-15T06:33:31Z
--- license: mit --- ### TTTE on Stable Diffusion This is the `<ttte-2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<ttte-2> 0](https://huggingface.co/sd-concepts-library/ttte/resolve/main/concept_images/0.jpeg) ![<ttte-2> 1](https://huggingface.co/sd-concepts-library/ttte/resolve/main/concept_images/3.jpeg) ![<ttte-2> 2](https://huggingface.co/sd-concepts-library/ttte/resolve/main/concept_images/1.jpeg) ![<ttte-2> 3](https://huggingface.co/sd-concepts-library/ttte/resolve/main/concept_images/2.jpeg)
slplab/wav2vec2_xlsr50k_english_phoneme
slplab
2022-09-15T05:10:10Z
38
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-29T15:42:18Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2_xlsr50k_english_phoneme results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_xlsr50k_english_phoneme This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [the TIMIT dataset](https://catalog.ldc.upenn.edu/LDC93s1). It achieves the following results on the evaluation set: - Loss: 0.5783 - Cer: 0.1178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8403 | 6.94 | 500 | 1.1345 | 0.4657 | | 0.5795 | 13.88 | 1000 | 0.3579 | 0.1169 | | 0.3567 | 20.83 | 1500 | 0.3866 | 0.1174 | | 0.2717 | 27.77 | 2000 | 0.4219 | 0.1169 | | 0.2135 | 34.72 | 2500 | 0.4861 | 0.1199 | | 0.1664 | 41.66 | 3000 | 0.5490 | 0.1179 | | 0.1375 | 48.61 | 3500 | 0.5783 | 0.1178 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.1 - Datasets 1.13.3 - Tokenizers 0.12.1
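A minimal usage sketch with the `transformers` ASR pipeline; unlike a standard ASR checkpoint, the output here is a phoneme sequence rather than orthographic text (the audio path is illustrative):

```python
from transformers import pipeline

phoneme_recognizer = pipeline("automatic-speech-recognition", model="slplab/wav2vec2_xlsr50k_english_phoneme")
print(phoneme_recognizer("speech.wav")["text"])
```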
sd-concepts-library/babushork
sd-concepts-library
2022-09-15T04:29:38Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-15T04:29:27Z
--- license: mit --- ### babushork on Stable Diffusion This is the `<babushork>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<babushork> 0](https://huggingface.co/sd-concepts-library/babushork/resolve/main/concept_images/0.jpeg) ![<babushork> 1](https://huggingface.co/sd-concepts-library/babushork/resolve/main/concept_images/3.jpeg) ![<babushork> 2](https://huggingface.co/sd-concepts-library/babushork/resolve/main/concept_images/5.jpeg) ![<babushork> 3](https://huggingface.co/sd-concepts-library/babushork/resolve/main/concept_images/1.jpeg) ![<babushork> 4](https://huggingface.co/sd-concepts-library/babushork/resolve/main/concept_images/2.jpeg) ![<babushork> 5](https://huggingface.co/sd-concepts-library/babushork/resolve/main/concept_images/4.jpeg)
xusysh/Reinforce-test-1
xusysh
2022-09-15T03:12:25Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-09-15T03:10:22Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-test-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 462.70 +/- 92.29 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
sd-concepts-library/agm-style-nao
sd-concepts-library
2022-09-15T02:22:22Z
0
35
null
[ "license:mit", "region:us" ]
null
2022-09-15T02:13:10Z
--- license: mit --- ### NOTE: This concept was trained using Waifu Diffusion <https://huggingface.co/hakurei/waifu-diffusion> ### agm-style on Stable Diffusion Artist: <https://www.pixiv.net/en/users/20670939> This is the `<agm-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<agm-style> 0](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/0.jpeg) ![<agm-style> 1](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/3.jpeg) ![<agm-style> 2](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/5.jpeg) ![<agm-style> 3](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/1.jpeg) ![<agm-style> 4](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/2.jpeg) ![<agm-style> 5](https://huggingface.co/sd-concepts-library/agm-style/resolve/main/concept_images/4.jpeg)
sd-concepts-library/uma-meme-style
sd-concepts-library
2022-09-15T02:16:00Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-15T02:15:36Z
--- license: mit --- ### uma-meme-style on Stable Diffusion This is the `<uma-meme-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<uma-meme-style> 0](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_7_.jpg) ![<uma-meme-style> 1](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/28.jpg) ![<uma-meme-style> 2](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_11_.jpg) ![<uma-meme-style> 3](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_12_.jpg) ![<uma-meme-style> 4](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_1_.png) ![<uma-meme-style> 5](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/22.jpg) ![<uma-meme-style> 6](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/10.jpg) ![<uma-meme-style> 7](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/KakaoTalk_20220904_015246222.jpg) ![<uma-meme-style> 8](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/50.jpg) ![<uma-meme-style> 9](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed.png) ![<uma-meme-style> 10](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_6_.jpg) ![<uma-meme-style> 11](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/21.jpg) ![<uma-meme-style> 12](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/FbCVln9WIAA74Z2.png) ![<uma-meme-style> 13](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/file.jpg) ![<uma-meme-style> 14](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/tt0.png) ![<uma-meme-style> 15](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/31.jpg) ![<uma-meme-style> 16](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed-1.jpg) ![<uma-meme-style> 17](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed.jpg) ![<uma-meme-style> 18](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_5_.jpg) ![<uma-meme-style> 19](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/3-30-25.png) ![<uma-meme-style> 20](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/Fb-Pk97aMAIgbYr.png) ![<uma-meme-style> 21](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/2.jpg) ![<uma-meme-style> 22](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_2_.png) ![<uma-meme-style> 23](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/6.jpg) 
![<uma-meme-style> 24](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_1_.jpg) ![<uma-meme-style> 25](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/FZoyWUcXwAE3k2K.png) ![<uma-meme-style> 26](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_4_.jpg) ![<uma-meme-style> 27](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/2022-09-14_13-02-28.png) ![<uma-meme-style> 28](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/16.jpg) ![<uma-meme-style> 29](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_9_.jpg) ![<uma-meme-style> 30](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_10_.jpg) ![<uma-meme-style> 31](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/4.jpg) ![<uma-meme-style> 32](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_3_.jpg) ![<uma-meme-style> 33](https://huggingface.co/sd-concepts-library/uma-meme-style/resolve/main/concept_images/unnamed_8_.jpg)
rebolforces/a2c-AntBulletEnv-v0
rebolforces
2022-09-15T00:55:27Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-09-14T20:35:00Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 2124.31 +/- 153.87 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` convention of `huggingface_sb3`:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub("rebolforces/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
Padomin/t5-base-TEDxJP-10front-1body-10rear
Padomin
2022-09-15T00:29:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-14T06:57:42Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-10front-1body-10rear results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-10front-1body-10rear This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.4366 - Wer: 0.1693 - Mer: 0.1636 - Wil: 0.2493 - Wip: 0.7507 - Hits: 55904 - Substitutions: 6304 - Deletions: 2379 - Insertions: 2249 - Cer: 0.1332 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.6166 | 1.0 | 1457 | 0.4595 | 0.2096 | 0.1979 | 0.2878 | 0.7122 | 54866 | 6757 | 2964 | 3819 | 0.1793 | | 0.4985 | 2.0 | 2914 | 0.4190 | 0.1769 | 0.1710 | 0.2587 | 0.7413 | 55401 | 6467 | 2719 | 2241 | 0.1417 | | 0.4787 | 3.0 | 4371 | 0.4130 | 0.1728 | 0.1670 | 0.2534 | 0.7466 | 55677 | 6357 | 2553 | 2249 | 0.1368 | | 0.4299 | 4.0 | 5828 | 0.4085 | 0.1726 | 0.1665 | 0.2530 | 0.7470 | 55799 | 6381 | 2407 | 2357 | 0.1348 | | 0.3855 | 5.0 | 7285 | 0.4130 | 0.1702 | 0.1644 | 0.2501 | 0.7499 | 55887 | 6309 | 2391 | 2292 | 0.1336 | | 0.3109 | 6.0 | 8742 | 0.4182 | 0.1732 | 0.1668 | 0.2525 | 0.7475 | 55893 | 6317 | 2377 | 2494 | 0.1450 | | 0.3027 | 7.0 | 10199 | 0.4256 | 0.1691 | 0.1633 | 0.2486 | 0.7514 | 55949 | 6273 | 2365 | 2283 | 0.1325 | | 0.2729 | 8.0 | 11656 | 0.4252 | 0.1709 | 0.1649 | 0.2503 | 0.7497 | 55909 | 6283 | 2395 | 2362 | 0.1375 | | 0.2531 | 9.0 | 13113 | 0.4329 | 0.1696 | 0.1639 | 0.2499 | 0.7501 | 55870 | 6322 | 2395 | 2235 | 0.1334 | | 0.2388 | 10.0 | 14570 | 0.4366 | 0.1693 | 0.1636 | 0.2493 | 0.7507 | 55904 | 6304 | 2379 | 2249 | 0.1332 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
mlaricheva/roberta-psych
mlaricheva
2022-09-14T23:37:07Z
170
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:1907.11692", "arxiv:2208.06525", "doi:10.57967/hf/1497", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-09T22:44:25Z
--- language: en license: cc-by-nc-sa-2.0 --- # roberta-psych This is a [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) model pretrained on the Alexander Street Database of Counselling and Psychotherapy Transcripts (see more about the database and its content [here](https://alexanderstreet.com/products/counseling-and-psychotherapy-transcripts-series)). Further information about training, parameters and evaluation is available in our paper: Laricheva, M., Zhang, C., Liu, Y., Chen, G., Tracey, T., Young, R., & Carenini, G. (2022). [Automated Utterance Labeling of Conversations Using Natural Language Processing.](https://arxiv.org/abs/2208.06525) arXiv preprint arXiv:2208.06525
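A minimal fill-mask sketch for this checkpoint; the masked sentence is illustrative, and RoBERTa uses `<mask>` as its mask token:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mlaricheva/roberta-psych")
for pred in fill_mask("The client felt <mask> during the session."):
    print(pred["token_str"], round(pred["score"], 3))
```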
Padomin/t5-base-TEDxJP-9front-1body-9rear
Padomin
2022-09-14T23:16:48Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-14T06:57:11Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-9front-1body-9rear results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-9front-1body-9rear This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.4361 - Wer: 0.1687 - Mer: 0.1630 - Wil: 0.2486 - Wip: 0.7514 - Hits: 55941 - Substitutions: 6292 - Deletions: 2354 - Insertions: 2252 - Cer: 0.1338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.6124 | 1.0 | 1457 | 0.4613 | 0.2407 | 0.2209 | 0.3091 | 0.6909 | 54843 | 6758 | 2986 | 5804 | 0.2153 | | 0.4968 | 2.0 | 2914 | 0.4171 | 0.1777 | 0.1716 | 0.2580 | 0.7420 | 55404 | 6354 | 2829 | 2293 | 0.1402 | | 0.4817 | 3.0 | 4371 | 0.4129 | 0.1731 | 0.1673 | 0.2534 | 0.7466 | 55636 | 6332 | 2619 | 2227 | 0.1349 | | 0.4257 | 4.0 | 5828 | 0.4089 | 0.1722 | 0.1659 | 0.2520 | 0.7480 | 55904 | 6346 | 2337 | 2437 | 0.1361 | | 0.3831 | 5.0 | 7285 | 0.4144 | 0.1705 | 0.1646 | 0.2508 | 0.7492 | 55868 | 6343 | 2376 | 2290 | 0.1358 | | 0.3057 | 6.0 | 8742 | 0.4198 | 0.1690 | 0.1632 | 0.2492 | 0.7508 | 55972 | 6333 | 2282 | 2298 | 0.1350 | | 0.2919 | 7.0 | 10199 | 0.4220 | 0.1693 | 0.1635 | 0.2492 | 0.7508 | 55936 | 6310 | 2341 | 2281 | 0.1337 | | 0.2712 | 8.0 | 11656 | 0.4252 | 0.1688 | 0.1632 | 0.2487 | 0.7513 | 55905 | 6286 | 2396 | 2218 | 0.1348 | | 0.2504 | 9.0 | 13113 | 0.4332 | 0.1685 | 0.1629 | 0.2482 | 0.7518 | 55931 | 6270 | 2386 | 2226 | 0.1331 | | 0.2446 | 10.0 | 14570 | 0.4361 | 0.1687 | 0.1630 | 0.2486 | 0.7514 | 55941 | 6292 | 2354 | 2252 | 0.1338 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
sd-concepts-library/thunderdome-cover
sd-concepts-library
2022-09-14T23:12:38Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-14T23:12:33Z
--- license: mit --- ### thunderdome-cover on Stable Diffusion This is the `<thunderdome-cover>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<thunderdome-cover> 0](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/26.jpeg) ![<thunderdome-cover> 1](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/0.jpeg) ![<thunderdome-cover> 2](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/31.jpeg) ![<thunderdome-cover> 3](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/8.jpeg) ![<thunderdome-cover> 4](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/3.jpeg) ![<thunderdome-cover> 5](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/5.jpeg) ![<thunderdome-cover> 6](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/22.jpeg) ![<thunderdome-cover> 7](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/32.jpeg) ![<thunderdome-cover> 8](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/29.jpeg) ![<thunderdome-cover> 9](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/6.jpeg) ![<thunderdome-cover> 10](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/30.jpeg) ![<thunderdome-cover> 11](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/11.jpeg) ![<thunderdome-cover> 12](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/34.jpeg) ![<thunderdome-cover> 13](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/27.jpeg) ![<thunderdome-cover> 14](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/36.jpeg) ![<thunderdome-cover> 15](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/35.jpeg) ![<thunderdome-cover> 16](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/1.jpeg) ![<thunderdome-cover> 17](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/25.jpeg) ![<thunderdome-cover> 18](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/21.jpeg) ![<thunderdome-cover> 19](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/14.jpeg) ![<thunderdome-cover> 20](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/15.jpeg) ![<thunderdome-cover> 21](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/23.jpeg) ![<thunderdome-cover> 22](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/17.jpeg) ![<thunderdome-cover> 
23](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/16.jpeg) ![<thunderdome-cover> 24](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/10.jpeg) ![<thunderdome-cover> 25](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/2.jpeg) ![<thunderdome-cover> 26](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/28.jpeg) ![<thunderdome-cover> 27](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/12.jpeg) ![<thunderdome-cover> 28](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/19.jpeg) ![<thunderdome-cover> 29](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/4.jpeg) ![<thunderdome-cover> 30](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/7.jpeg) ![<thunderdome-cover> 31](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/24.jpeg) ![<thunderdome-cover> 32](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/33.jpeg) ![<thunderdome-cover> 33](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/9.jpeg) ![<thunderdome-cover> 34](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/20.jpeg) ![<thunderdome-cover> 35](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/18.jpeg) ![<thunderdome-cover> 36](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/13.jpeg)
sd-concepts-library/uma-meme
sd-concepts-library
2022-09-14T23:08:43Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-14T23:08:20Z
---
license: mit
---
### uma-meme on Stable Diffusion
This is the `<uma-object-full>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<uma-object-full> 0](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_7_.jpg)
![<uma-object-full> 1](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/28.jpg)
![<uma-object-full> 2](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_11_.jpg)
![<uma-object-full> 3](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_12_.jpg)
![<uma-object-full> 4](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_1_.png)
![<uma-object-full> 5](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/22.jpg)
![<uma-object-full> 6](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/10.jpg)
![<uma-object-full> 7](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/KakaoTalk_20220904_015246222.jpg)
![<uma-object-full> 8](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/50.jpg)
![<uma-object-full> 9](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed.png)
![<uma-object-full> 10](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_6_.jpg)
![<uma-object-full> 11](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/21.jpg)
![<uma-object-full> 12](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/FbCVln9WIAA74Z2.png)
![<uma-object-full> 13](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/file.jpg)
![<uma-object-full> 14](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/tt0.png)
![<uma-object-full> 15](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/31.jpg)
![<uma-object-full> 16](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed-1.jpg)
![<uma-object-full> 17](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed.jpg)
![<uma-object-full> 18](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_5_.jpg)
![<uma-object-full> 19](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/3-30-25.png)
![<uma-object-full> 20](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/Fb-Pk97aMAIgbYr.png)
![<uma-object-full> 21](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/2.jpg)
![<uma-object-full> 22](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_2_.png)
![<uma-object-full> 23](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/6.jpg)
![<uma-object-full> 24](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_1_.jpg)
![<uma-object-full> 25](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/FZoyWUcXwAE3k2K.png)
![<uma-object-full> 26](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_4_.jpg)
![<uma-object-full> 27](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/2022-09-14_13-02-28.png)
![<uma-object-full> 28](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/16.jpg)
![<uma-object-full> 29](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_9_.jpg)
![<uma-object-full> 30](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_10_.jpg)
![<uma-object-full> 31](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/4.jpg)
![<uma-object-full> 32](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_3_.jpg)
![<uma-object-full> 33](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_8_.jpg)
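If you want to use the concept outside the Colab notebooks, the sketch below shows one way to load an sd-concepts-library embedding into a 🤗 Diffusers pipeline. This is a minimal sketch, not an official loader: it assumes the repo stores its token in a `learned_embeds.bin` file (the usual convention for this library) and uses `CompVis/stable-diffusion-v1-4` as the base model. The same recipe applies to the other concept repositories below.

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Assumption: the repo follows the sd-concepts-library convention of shipping
# the learned token as a one-entry dict in learned_embeds.bin
embed_path = hf_hub_download("sd-concepts-library/uma-meme", "learned_embeds.bin")
learned_embeds = torch.load(embed_path, map_location="cpu")
token, embedding = next(iter(learned_embeds.items()))  # e.g. "<uma-object-full>"

# Base model is an assumption; may require `huggingface-cli login` / license acceptance
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Register the new token and copy its embedding into the text encoder
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a photo of {token}").images[0]
image.save("concept.png")
```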
sd-concepts-library/mayor-richard-irvin
sd-concepts-library
2022-09-14T23:07:45Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T23:07:34Z
---
license: mit
---
### mayor-richard-irvin on Stable Diffusion
This is the `<Richard_Irvin>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<Richard_Irvin> 0](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/0.jpeg)
![<Richard_Irvin> 1](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/3.jpeg)
![<Richard_Irvin> 2](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/1.jpeg)
![<Richard_Irvin> 3](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/2.jpeg)
sd-concepts-library/buddha-statue
sd-concepts-library
2022-09-14T22:23:55Z
0
4
null
[ "license:mit", "region:us" ]
null
2022-09-14T22:23:48Z
---
license: mit
---
### Buddha statue on Stable Diffusion
This is the `<buddha-statue>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<buddha-statue> 0](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/0.jpeg)
![<buddha-statue> 1](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/3.jpeg)
![<buddha-statue> 2](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/1.jpeg)
![<buddha-statue> 3](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/2.jpeg)
sd-concepts-library/retro-girl
sd-concepts-library
2022-09-14T21:34:09Z
0
9
null
[ "license:mit", "region:us" ]
null
2022-09-14T21:33:58Z
---
license: mit
---
### Retro-Girl on Stable Diffusion
This is the `<retro-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<retro-girl> 0](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/0.jpeg)
![<retro-girl> 1](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/3.jpeg)
![<retro-girl> 2](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/1.jpeg)
![<retro-girl> 3](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/2.jpeg)
![<retro-girl> 4](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/4.jpeg)
michael20at/q-Taxi-v3
michael20at
2022-09-14T21:20:43Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-09-14T21:16:38Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="michael20at/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
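`load_from_hub` and `evaluate_agent` are helper functions from the Deep RL course notebooks, not a published package. As a rough sketch of what the evaluation step amounts to (assuming the classic `gym` API where `env.step` returns a 4-tuple, and the pickled dict layout shown above), a greedy rollout over the Q-table looks like this:

```python
import gym
import numpy as np

def greedy_rollout(env, qtable, max_steps, n_eval_episodes):
    """Roughly what evaluate_agent does: act greedily w.r.t. the Q-table."""
    episode_rewards = []
    for _ in range(n_eval_episodes):
        state = env.reset()  # newer gym versions return (state, info) instead
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # greedy action for this state
            state, reward, done, _ = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)

env = gym.make("Taxi-v3")
```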
sd-concepts-library/collage3-hubcity
sd-concepts-library
2022-09-14T20:42:35Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T05:37:48Z
---
license: mit
---
### Collage3-HubCity on Stable Diffusion
This is the `<C3Hub>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<C3Hub> 0](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/4.jpeg)
![<C3Hub> 1](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/3.jpeg)
![<C3Hub> 2](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/2.jpeg)
![<C3Hub> 3](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/1.jpeg)
![<C3Hub> 4](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/0.jpeg)
DamianCummins/distilbert-base-uncased-finetuned-ner
DamianCummins
2022-09-14T20:03:13Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-14T17:54:02Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: DamianCummins/distilbert-base-uncased-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# DamianCummins/distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0556
- Validation Loss: 0.0608
- Train Precision: 0.9196
- Train Recall: 0.9304
- Train F1: 0.9250
- Train Accuracy: 0.9820
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.0556 | 0.0608 | 0.9196 | 0.9304 | 0.9250 | 0.9820 | 0 |

### Framework versions

- Transformers 4.21.3
- TensorFlow 2.9.2
- Datasets 2.4.0
- Tokenizers 0.12.1
michael20at/q-FrozenLake-v1-4x4-noSlippery
michael20at
2022-09-14T20:03:08Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-09-14T20:03:02Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="michael20at/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
sd-concepts-library/rektguy
sd-concepts-library
2022-09-14T19:39:29Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T19:39:24Z
---
license: mit
---
### rektguy on Stable Diffusion
This is the `<rektguy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<rektguy> 0](https://huggingface.co/sd-concepts-library/rektguy/resolve/main/concept_images/1.jpeg)
![<rektguy> 1](https://huggingface.co/sd-concepts-library/rektguy/resolve/main/concept_images/0.jpeg)
![<rektguy> 2](https://huggingface.co/sd-concepts-library/rektguy/resolve/main/concept_images/4.jpeg)
![<rektguy> 3](https://huggingface.co/sd-concepts-library/rektguy/resolve/main/concept_images/2.jpeg)
![<rektguy> 4](https://huggingface.co/sd-concepts-library/rektguy/resolve/main/concept_images/3.jpeg)
matemato/ppo-LunarLander-v2
matemato
2022-09-14T18:58:12Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-09-14T18:57:33Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 191.85 +/- 23.17
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
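Since the usage section above is still a TODO, here is a minimal sketch of the usual `huggingface_sb3` loading pattern. The checkpoint filename `ppo-LunarLander-v2.zip` is an assumption (check the repo's file list), not something this card states:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename: verify against the files in the repository
checkpoint = load_from_hub(repo_id="matemato/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```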
sd-concepts-library/joe-whiteford-art-style
sd-concepts-library
2022-09-14T18:43:02Z
0
3
null
[ "license:mit", "region:us" ]
null
2022-09-14T18:42:46Z
---
license: mit
---
### Joe Whiteford Art Style on Stable Diffusion
This is the `<joe-whiteford-artstyle>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<joe-whiteford-artstyle> 0](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/0.jpeg)
![<joe-whiteford-artstyle> 1](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/8.jpeg)
![<joe-whiteford-artstyle> 2](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/3.jpeg)
![<joe-whiteford-artstyle> 3](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/5.jpeg)
![<joe-whiteford-artstyle> 4](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/6.jpeg)
![<joe-whiteford-artstyle> 5](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/1.jpeg)
![<joe-whiteford-artstyle> 6](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/2.jpeg)
![<joe-whiteford-artstyle> 7](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/4.jpeg)
![<joe-whiteford-artstyle> 8](https://huggingface.co/sd-concepts-library/joe-whiteford-art-style/resolve/main/concept_images/7.jpeg)
sd-concepts-library/my-mug
sd-concepts-library
2022-09-14T17:53:46Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T17:53:40Z
---
license: mit
---
### My mug on Stable Diffusion
This is the `<my-mug>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<my-mug> 0](https://huggingface.co/sd-concepts-library/my-mug/resolve/main/concept_images/1.jpeg)
![<my-mug> 1](https://huggingface.co/sd-concepts-library/my-mug/resolve/main/concept_images/0.jpeg)
![<my-mug> 2](https://huggingface.co/sd-concepts-library/my-mug/resolve/main/concept_images/2.jpeg)
rvidaurre/ddpm-butterflies-128
rvidaurre
2022-09-14T17:21:07Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-09-14T16:06:53Z
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/rvidaurre/ddpm-butterflies-128/tensorboard?#scalars)
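The snippet placeholder above is empty, so here is a minimal, hedged example of sampling from an unconditional checkpoint with the 🤗 Diffusers `DDPMPipeline`. It assumes this repository loads as a `DDPMPipeline` (its tags say so); the output attribute differs between diffusers versions, as noted in the comments:

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("rvidaurre/ddpm-butterflies-128")

# Recent diffusers: pipeline().images[0]; very old versions: pipeline()["sample"][0]
image = pipeline().images[0]  # one unconditional sample at the training resolution
image.save("butterfly.png")
```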
sd-concepts-library/sterling-archer
sd-concepts-library
2022-09-14T17:00:25Z
0
13
null
[ "license:mit", "region:us" ]
null
2022-09-14T17:00:11Z
---
license: mit
---
### Sterling-Archer on Stable Diffusion
This is the `<archer-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<archer-style> 0](https://huggingface.co/sd-concepts-library/sterling-archer/resolve/main/concept_images/1.jpeg)
![<archer-style> 1](https://huggingface.co/sd-concepts-library/sterling-archer/resolve/main/concept_images/5.jpeg)
![<archer-style> 2](https://huggingface.co/sd-concepts-library/sterling-archer/resolve/main/concept_images/0.jpeg)
![<archer-style> 3](https://huggingface.co/sd-concepts-library/sterling-archer/resolve/main/concept_images/4.jpeg)
![<archer-style> 4](https://huggingface.co/sd-concepts-library/sterling-archer/resolve/main/concept_images/2.jpeg)
![<archer-style> 5](https://huggingface.co/sd-concepts-library/sterling-archer/resolve/main/concept_images/3.jpeg)
VietAI/vit5-base-vietnews-summarization
VietAI
2022-09-14T16:46:02Z
543
7
transformers
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "summarization", "vi", "dataset:cc100", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-09-07T02:47:53Z
---
language: vi
datasets:
- cc100
tags:
- summarization
license: mit
widget:
- text: "VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam."
---

# ViT5-Base Finetuned on `vietnews` Abstractive Summarization (No prefix needed)

State-of-the-art pretrained Transformer-based encoder-decoder model for Vietnamese.

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/vit5-pretrained-text-to-text-transformer-for/abstractive-text-summarization-on-vietnews)](https://paperswithcode.com/sota/abstractive-text-summarization-on-vietnews?p=vit5-pretrained-text-to-text-transformer-for)

## How to use

For more details, do check out [our Github repo](https://github.com/vietai/ViT5) and [eval script](https://github.com/vietai/ViT5/blob/main/eval/Eval_vietnews_sum.ipynb).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("VietAI/vit5-base-vietnews-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/vit5-base-vietnews-summarization")
model.cuda()

sentence = "VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam."
sentence = sentence + "</s>"
encoding = tokenizer(sentence, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)
for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```

## Citation

```
@inproceedings{phan-etal-2022-vit5,
    title = "{V}i{T}5: Pretrained Text-to-Text Transformer for {V}ietnamese Language Generation",
    author = "Phan, Long and Tran, Hieu and Nguyen, Hieu and Trinh, Trieu H.",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop",
    year = "2022",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-srw.18",
    pages = "136--142",
}
```
jcastanyo/Reinforce-CP-v0
jcastanyo
2022-09-14T16:19:16Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-09-14T16:18:12Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CP-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 62.30 +/- 30.18
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
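Unit 5 of the course walks through the algorithm in full; as a reminder of the core update (a sketch of textbook REINFORCE, not this repo's exact code), the Monte-Carlo policy-gradient loss weights each action's log-probability by the discounted return that followed it:

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of log pi(a_t|s_t) tensors; rewards: list of r_t for one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-9)  # common normalization trick
    return -(torch.stack(log_probs) * returns).sum()
```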
sd-concepts-library/neon-pastel
sd-concepts-library
2022-09-14T15:55:45Z
0
6
null
[ "license:mit", "region:us" ]
null
2022-09-14T15:47:29Z
---
license: mit
---
### Neon Pastel on Stable Diffusion
This is the `<neon-pastel>` style taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here are some of the outputs from this model:

Prompt: the taj mahal in `<neon-pastel>` style
![<neon-pastel> 0](https://huggingface.co/sd-concepts-library/neon-pastel/resolve/main/outputs/taj_mahal.jpeg)

Prompt: portrait of barack obama in `<neon-pastel>` style
![<neon-pastel> 1](https://huggingface.co/sd-concepts-library/neon-pastel/resolve/main/outputs/portraitOfBarackObama.jpeg)

Prompt: a beautiful beach landscape in `<neon-pastel>` style
![<neon-pastel> 2](https://huggingface.co/sd-concepts-library/neon-pastel/resolve/main/outputs/beachLandscape.jpeg)
theunnecessarythings/ddpm-ema-flowers-64
theunnecessarythings
2022-09-14T15:54:23Z
2
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/flowers-102-categories", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-09-14T14:15:13Z
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/flowers-102-categories
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-ema-flowers-64

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/flowers-102-categories` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no

### Training results

📈 [TensorBoard logs](https://huggingface.co/sreerajr000/ddpm-ema-flowers-64/tensorboard?#scalars)
sd-concepts-library/tb303
sd-concepts-library
2022-09-14T15:26:31Z
0
3
null
[ "license:mit", "region:us" ]
null
2022-09-14T15:26:17Z
---
license: mit
---
### TB303 on Stable Diffusion
This is the `<"tb303>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<"tb303> 0](https://huggingface.co/sd-concepts-library/tb303/resolve/main/concept_images/1.jpeg)
![<"tb303> 1](https://huggingface.co/sd-concepts-library/tb303/resolve/main/concept_images/0.jpeg)
![<"tb303> 2](https://huggingface.co/sd-concepts-library/tb303/resolve/main/concept_images/4.jpeg)
![<"tb303> 3](https://huggingface.co/sd-concepts-library/tb303/resolve/main/concept_images/2.jpeg)
![<"tb303> 4](https://huggingface.co/sd-concepts-library/tb303/resolve/main/concept_images/3.jpeg)
MayaGalvez/bert-base-multilingual-cased-finetuned-multilingual-nli
MayaGalvez
2022-09-14T15:24:58Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-14T13:25:22Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-multilingual-cased-finetuned-multilingual-nli_newdata_oneepoch
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-multilingual-cased-finetuned-multilingual-nli_newdata_oneepoch

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7647
- Accuracy: 0.6853
- Precision: 0.6932
- Recall: 0.6853
- F1: 0.6847

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9394 | 0.04 | 500 | 0.9044 | 0.592 | 0.5985 | 0.592 | 0.5917 |
| 0.8603 | 0.08 | 1000 | 0.9159 | 0.579 | 0.6210 | 0.579 | 0.5739 |
| 0.8293 | 0.11 | 1500 | 0.8520 | 0.6214 | 0.6278 | 0.6214 | 0.6215 |
| 0.8042 | 0.15 | 2000 | 0.8085 | 0.6418 | 0.6439 | 0.6418 | 0.6414 |
| 0.7945 | 0.19 | 2500 | 0.8251 | 0.6319 | 0.6575 | 0.6319 | 0.6262 |
| 0.7768 | 0.23 | 3000 | 0.8298 | 0.6383 | 0.6556 | 0.6383 | 0.6365 |
| 0.753 | 0.27 | 3500 | 0.8225 | 0.6464 | 0.6684 | 0.6464 | 0.6436 |
| 0.754 | 0.3 | 4000 | 0.7979 | 0.6529 | 0.6750 | 0.6529 | 0.6523 |
| 0.7466 | 0.34 | 4500 | 0.7644 | 0.6718 | 0.6727 | 0.6718 | 0.6713 |
| 0.7331 | 0.38 | 5000 | 0.7861 | 0.6591 | 0.6757 | 0.6591 | 0.6581 |
| 0.72 | 0.42 | 5500 | 0.7972 | 0.6595 | 0.6815 | 0.6595 | 0.6582 |
| 0.7103 | 0.46 | 6000 | 0.7652 | 0.6702 | 0.6728 | 0.6702 | 0.6688 |
| 0.7103 | 0.49 | 6500 | 0.7732 | 0.6684 | 0.6796 | 0.6684 | 0.6670 |
| 0.7023 | 0.53 | 7000 | 0.7921 | 0.6657 | 0.6834 | 0.6657 | 0.6663 |
| 0.6827 | 0.57 | 7500 | 0.7672 | 0.6733 | 0.6824 | 0.6733 | 0.6726 |
| 0.6826 | 0.61 | 8000 | 0.7665 | 0.6755 | 0.6789 | 0.6755 | 0.6747 |
| 0.6705 | 0.65 | 8500 | 0.7659 | 0.6755 | 0.6815 | 0.6755 | 0.6748 |
| 0.662 | 0.68 | 9000 | 0.7738 | 0.6767 | 0.6833 | 0.6767 | 0.6757 |
| 0.6556 | 0.72 | 9500 | 0.7623 | 0.6805 | 0.6906 | 0.6805 | 0.6799 |
| 0.6462 | 0.76 | 10000 | 0.7863 | 0.6719 | 0.6849 | 0.6719 | 0.6701 |
| 0.6405 | 0.8 | 10500 | 0.7523 | 0.681 | 0.6845 | 0.681 | 0.6805 |
| 0.6407 | 0.84 | 11000 | 0.7661 | 0.6807 | 0.6856 | 0.6807 | 0.6801 |
| 0.6341 | 0.87 | 11500 | 0.7672 | 0.6787 | 0.6904 | 0.6787 | 0.6770 |
| 0.6292 | 0.91 | 12000 | 0.7742 | 0.682 | 0.6922 | 0.682 | 0.6803 |
| 0.6238 | 0.95 | 12500 | 0.7584 | 0.6855 | 0.6926 | 0.6855 | 0.6850 |
| 0.6201 | 0.99 | 13000 | 0.7647 | 0.6853 | 0.6932 | 0.6853 | 0.6847 |

### Framework versions

- Transformers 4.21.0
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
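The card reports metrics but no inference code. Since the NLI label mapping is not documented here, the sketch below only prints the arg-max class index and leaves the label names to `model.config.id2label`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "MayaGalvez/bert-base-multilingual-cased-finetuned-multilingual-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Premise / hypothesis pair encoded as a sentence pair
inputs = tokenizer("A man is playing guitar.", "A person is making music.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(pred, model.config.id2label[pred])  # check id2label for the actual label names
```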
JAS100/bert-finetuned-ner
JAS100
2022-09-14T14:28:29Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-14T14:10:54Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: train
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9330024813895782
    - name: Recall
      type: recall
      value: 0.9491753618310333
    - name: F1
      type: f1
      value: 0.9410194377242012
    - name: Accuracy
      type: accuracy
      value: 0.9865926885265203
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9330
- Recall: 0.9492
- F1: 0.9410
- Accuracy: 0.9866

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0722 | 0.9149 | 0.9295 | 0.9221 | 0.9814 |
| 0.0353 | 2.0 | 3512 | 0.0593 | 0.9223 | 0.9492 | 0.9356 | 0.9863 |
| 0.018 | 3.0 | 5268 | 0.0620 | 0.9330 | 0.9492 | 0.9410 | 0.9866 |

### Framework versions

- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
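The card does not include an inference example. A minimal sketch with the standard transformers pipeline, assuming the tokenizer and label map were saved alongside the model (which `Trainer` normally does):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="JAS100/bert-finetuned-ner",
               aggregation_strategy="simple")  # merge word pieces into entity spans
print(ner("Hugging Face is based in New York City."))
# Output: a list of dicts with entity_group, score, word, start, end
```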
Abdulmateen/bert-finetuned-ner
Abdulmateen
2022-09-14T14:26:48Z
61
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-13T05:49:10Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Abdulmateen/bert-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Abdulmateen/bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1865
- Validation Loss: 0.1351
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4679 | 0.2717 | 0 |
| 0.2578 | 0.1703 | 1 |
| 0.1865 | 0.1351 | 2 |

### Framework versions

- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
BlinkDL/rwkv-3-pile-1b5
BlinkDL
2022-09-14T13:54:48Z
0
7
null
[ "pytorch", "text-generation", "causal-lm", "rwkv", "en", "license:apache-2.0", "region:us" ]
text-generation
2022-06-23T11:44:36Z
---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- The Pile
---

# RWKV-3 1.5B

## Model Description

RWKV-3 1.5B is a L24-D2048 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

RWKV-4 1.5B is out: https://huggingface.co/BlinkDL/rwkv-4-pile-1b5

At this moment you have to use my Github code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it.

ctx_len = 896
n_layer = 24
n_embd = 2048

Preview checkpoint: RWKV-3-Pile-20220723-3542.pth : Trained on the Pile for 127B tokens.
* Pile loss 2.102
* LAMBADA ppl 7.52, acc 54.71%
* PIQA acc 71.11%
* SC2016 acc 67.24%
* Hellaswag acc_norm 50.45%

Preview checkpoint: 20220708-1905.pth : Trained on the Pile for 68B tokens.
* Pile loss 2.148
* LAMBADA ppl 8.41, acc 53.17%
* PIQA acc 69.64%
* SC2016 acc 67.08%
* Hellaswag acc_norm 48.20%

(I am still training it)
BlinkDL/rwkv-2-pile-430m
BlinkDL
2022-09-14T13:54:23Z
0
4
null
[ "pytorch", "text-generation", "causal-lm", "rwkv", "en", "license:apache-2.0", "region:us" ]
text-generation
2022-06-23T03:09:51Z
---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- The Pile
---

# RWKV-2 430M

## Model Description

RWKV-2 430M is a L24-D1024 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

At this moment you have to use my Github code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it.

ctx_len = 768
n_layer = 24
n_embd = 1024

Final checkpoint: 20220615-10803.pth : Trained on the Pile for 331B tokens.
* Pile loss 2.349
* LAMBADA ppl 15.34, acc 42.42%
* PIQA acc 67.03%
* SC2016 acc 62.05%
* Hellaswag acc_norm 38.47%
NoaG/try
NoaG
2022-09-14T12:26:30Z
0
0
null
[ "region:us" ]
null
2022-09-14T12:09:04Z
# Dummy model

This is a Dummy model.
burakyldrm/wav2vec2-burak-new-300-v2
burakyldrm
2022-09-14T12:10:15Z
94
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-14T03:30:38Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-burak-new-300-v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-burak-new-300-v2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7018
- Wer: 0.3641

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 141

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.2967 | 8.62 | 500 | 1.0561 | 0.8351 |
| 0.5199 | 17.24 | 1000 | 0.6019 | 0.5054 |
| 0.2249 | 25.86 | 1500 | 0.6036 | 0.4576 |
| 0.1573 | 34.48 | 2000 | 0.6680 | 0.4532 |
| 0.1301 | 43.1 | 2500 | 0.6823 | 0.4396 |
| 0.1108 | 51.72 | 3000 | 0.6630 | 0.4263 |
| 0.0941 | 60.34 | 3500 | 0.6574 | 0.4226 |
| 0.0797 | 68.97 | 4000 | 0.6796 | 0.4174 |
| 0.0689 | 77.59 | 4500 | 0.6426 | 0.4088 |
| 0.0612 | 86.21 | 5000 | 0.6615 | 0.3949 |
| 0.0528 | 94.83 | 5500 | 0.6769 | 0.3906 |
| 0.0515 | 103.45 | 6000 | 0.6073 | 0.3827 |
| 0.0394 | 112.07 | 6500 | 0.7099 | 0.3767 |
| 0.0352 | 120.69 | 7000 | 0.7082 | 0.3688 |
| 0.0324 | 129.31 | 7500 | 0.6939 | 0.3699 |
| 0.028 | 137.93 | 8000 | 0.7018 | 0.3641 |

### Framework versions

- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
sd-concepts-library/junji-ito-artstyle
sd-concepts-library
2022-09-14T11:45:57Z
0
6
null
[ "license:mit", "region:us" ]
null
2022-09-14T11:45:43Z
---
license: mit
---
### Junji Ito ArtStyle on Stable Diffusion
This is the `<junji-ito-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<junji-ito-style> 0](https://huggingface.co/sd-concepts-library/junji-ito-artstyle/resolve/main/concept_images/1.jpeg)
![<junji-ito-style> 1](https://huggingface.co/sd-concepts-library/junji-ito-artstyle/resolve/main/concept_images/0.jpeg)
![<junji-ito-style> 2](https://huggingface.co/sd-concepts-library/junji-ito-artstyle/resolve/main/concept_images/2.jpeg)
![<junji-ito-style> 3](https://huggingface.co/sd-concepts-library/junji-ito-artstyle/resolve/main/concept_images/3.jpeg)
sd-concepts-library/breakcore
sd-concepts-library
2022-09-14T11:39:49Z
0
5
null
[ "license:mit", "region:us" ]
null
2022-09-14T11:39:45Z
---
license: mit
---
### breakcore on Stable Diffusion
This is the `breakcore` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![breakcore 0](https://huggingface.co/sd-concepts-library/breakcore/resolve/main/concept_images/1.jpeg)
![breakcore 1](https://huggingface.co/sd-concepts-library/breakcore/resolve/main/concept_images/0.jpeg)
![breakcore 2](https://huggingface.co/sd-concepts-library/breakcore/resolve/main/concept_images/2.jpeg)
![breakcore 3](https://huggingface.co/sd-concepts-library/breakcore/resolve/main/concept_images/3.jpeg)
theojolliffe/bart-model2-1409
theojolliffe
2022-09-14T11:35:12Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-14T09:28:15Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-model2-1409
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-model2-1409

This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 434 | 0.0866 | 65.9374 | 57.1515 | 64.1273 | 64.1253 | 19.787 |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
michael20at/ppo-LunarLander-v2
michael20at
2022-09-14T11:23:41Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-09-12T05:17:47Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 261.95 +/- 15.45
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
1ucky40nc3/wav2vec2-common_voice-tr-demo
1ucky40nc3
2022-09-14T11:12:34Z
165
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-14T08:47:52Z
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-common_voice-tr-demo

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3798
- Wer: 0.3448

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5932 | 1.0 |
| No log | 1.83 | 200 | 3.0185 | 0.9999 |
| No log | 2.75 | 300 | 0.9357 | 0.8007 |
| No log | 3.67 | 400 | 0.5945 | 0.6318 |
| 3.1829 | 4.59 | 500 | 0.4931 | 0.5265 |
| 3.1829 | 5.5 | 600 | 0.4757 | 0.4784 |
| 3.1829 | 6.42 | 700 | 0.4282 | 0.4540 |
| 3.1829 | 7.34 | 800 | 0.3995 | 0.4252 |
| 3.1829 | 8.26 | 900 | 0.4046 | 0.4149 |
| 0.2215 | 9.17 | 1000 | 0.4048 | 0.3951 |
| 0.2215 | 10.09 | 1100 | 0.3944 | 0.3865 |
| 0.2215 | 11.01 | 1200 | 0.3853 | 0.3643 |
| 0.2215 | 11.93 | 1300 | 0.3950 | 0.3632 |
| 0.2215 | 12.84 | 1400 | 0.3836 | 0.3506 |
| 0.1009 | 13.76 | 1500 | 0.3808 | 0.3510 |
| 0.1009 | 14.68 | 1600 | 0.3807 | 0.3456 |

### Framework versions

- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
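No inference snippet is given; one plausible way to transcribe Turkish audio with this checkpoint is the standard ASR pipeline (a sketch; `sample.wav` is a placeholder for your own recording, and audio decoding requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="1ucky40nc3/wav2vec2-common_voice-tr-demo")
# The pipeline resamples the file; 16 kHz mono input matches the training data
print(asr("sample.wav")["text"])
```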
sd-concepts-library/reksio-dog
sd-concepts-library
2022-09-14T10:47:52Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-14T10:47:46Z
---
license: mit
---
### reksio dog on Stable Diffusion
This is the `<reksio-dog>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<reksio-dog> 0](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/1.jpeg)
![<reksio-dog> 1](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/5.jpeg)
![<reksio-dog> 2](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/0.jpeg)
![<reksio-dog> 3](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/4.jpeg)
![<reksio-dog> 4](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/2.jpeg)
![<reksio-dog> 5](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/3.jpeg)
sguskin/dynamic-minilmv2-L6-H384-squad1.1
sguskin
2022-09-14T10:15:13Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "dataset:SQuAD1.1", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-07-28T07:29:30Z
--- license: "mit" datasets: - SQuAD1.1 ---
IIIT-L/xlm-roberta-large-finetuned-TRAC-DS
IIIT-L
2022-09-14T09:45:42Z
104
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-14T07:48:15Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-large-finetuned-TRAC-DS
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-large-finetuned-TRAC-DS

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0992
- Accuracy: 0.3342
- Precision: 0.1114
- Recall: 0.3333
- F1: 0.1670

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.1187640010910775e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1358 | 0.25 | 612 | 1.1003 | 0.4436 | 0.1479 | 0.3333 | 0.2049 |
| 1.1199 | 0.5 | 1224 | 1.1130 | 0.4436 | 0.1479 | 0.3333 | 0.2049 |
| 1.1221 | 0.75 | 1836 | 1.0992 | 0.3342 | 0.1114 | 0.3333 | 0.1670 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
sd-concepts-library/chillpill
sd-concepts-library
2022-09-14T09:44:54Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-13T20:30:22Z
---
license: mit
---
### Chillpill on Stable Diffusion
This is the `<Chillpill>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![Chillpill 0](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/1.jpeg)
![Chillpill 1](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/0.jpeg)
![Chillpill 2](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/4.jpeg)
![Chillpill 3](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/2.jpeg)
![Chillpill 4](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/3.jpeg)
MiguelCosta/distlBert-base-uncased-20Newsgroups
MiguelCosta
2022-09-14T09:34:23Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-14T09:34:13Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distlBert-base-uncased-20Newsgroups
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# distlBert-base-uncased-20Newsgroups

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3180, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
NBF/distilbert-base-uncased-finetuned-cola
NBF
2022-09-14T08:56:56Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-13T10:25:26Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5512772054945002
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8147
- Matthews Correlation: 0.5513

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.532 | 1.0 | 535 | 0.5646 | 0.3988 |
| 0.3638 | 2.0 | 1070 | 0.5168 | 0.4911 |
| 0.2456 | 3.0 | 1605 | 0.5489 | 0.5414 |
| 0.183 | 4.0 | 2140 | 0.7633 | 0.5437 |
| 0.1375 | 5.0 | 2675 | 0.8147 | 0.5513 |

### Framework versions

- Transformers 4.11.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
sd-concepts-library/naval-portrait
sd-concepts-library
2022-09-14T08:44:51Z
0
6
null
[ "license:mit", "region:us" ]
null
2022-09-14T08:44:36Z
---
license: mit
---
### naval-portrait on Stable Diffusion
This is the `<naval-portrait>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<naval-portrait> 0](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/1.jpeg)
![<naval-portrait> 1](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/5.jpeg)
![<naval-portrait> 2](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/0.jpeg)
![<naval-portrait> 3](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/4.jpeg)
![<naval-portrait> 4](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/2.jpeg)
![<naval-portrait> 5](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/3.jpeg)
![<naval-portrait> 6](https://huggingface.co/sd-concepts-library/naval-portrait/resolve/main/concept_images/6.jpeg)
thkkvui/distilbert-base-uncased-finetuned-emotion
thkkvui
2022-09-14T08:31:47Z
109
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-14T08:08:11Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
    - name: F1
      type: f1
      value: 0.9249199121289653
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2240
- Accuracy: 0.925
- F1: 0.9249

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8487 | 1.0 | 250 | 0.3310 | 0.9045 | 0.9011 |
| 0.2606 | 2.0 | 500 | 0.2240 | 0.925 | 0.9249 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.13.0.dev20220711
- Datasets 2.4.0
- Tokenizers 0.12.1
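The card has no inference example; a minimal sketch with the standard text-classification pipeline (assuming the emotion label names were saved in the model config, which `Trainer` normally does):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="thkkvui/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled that the fine-tuning finally converged!"))
# Output: [{'label': ..., 'score': ...}] with the predicted emotion
```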
hubtype/distilbert-base-uncased-nonsense
hubtype
2022-09-14T08:09:05Z
3,576
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-13T15:43:03Z
## Definition This text classification model detects whether a given text is nonsense, i.e., carries no coherent meaning. ## Usage Recommendations - **max\_length**: 128 - **padding**: "max_length" - **truncation**: True ## Performance - **Accuracy**: 99\%
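A minimal sketch that applies the recommended settings end to end is shown below. The label names are not documented in this card, so they are read from the model config rather than hard-coded:

```python
# Sketch applying the recommended settings (max_length=128, padding="max_length",
# truncation=True). Label names are taken from the config, since the card omits them.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "hubtype/distilbert-base-uncased-nonsense"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "asdkj qwoei zzkx",
    max_length=128,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```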
mishig/temp-model
mishig
2022-09-14T08:07:18Z
0
1
null
[ "layoutlm", "pdf", "document-question-answering", "en", "license:mit", "region:us" ]
document-question-answering
2022-05-25T09:59:02Z
--- pipeline_tag: document-question-answering language: en license: mit tags: - layoutlm - pdf --- # LayoutLM for Visual Question Answering This is a fine-tuned version of the multi-modal [LayoutLM](https://aka.ms/layoutlm) model for the task of question answering on documents. It has been fine-tuned using both the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) and [DocVQA](https://www.docvqa.org/) datasets. ## Getting started with the model To run these examples, you must have [PIL](https://pillow.readthedocs.io/en/stable/installation.html), [pytesseract](https://pypi.org/project/pytesseract/), and [PyTorch](https://pytorch.org/get-started/locally/) installed in addition to [transformers](https://huggingface.co/docs/transformers/index). ```python from transformers import pipeline nlp = pipeline( "document-question-answering", model="impira/layoutlm-document-qa", ) nlp( "https://templates.invoicehome.com/invoice-template-us-neat-750px.png", "What is the invoice number?" ) # {'score': 0.9943977, 'answer': 'us-001', 'start': 15, 'end': 15} nlp( "https://miro.medium.com/max/787/1*iECQRIiOGTmEFLdWkVIH2g.jpeg", "What is the purchase amount?" ) # {'score': 0.9912159, 'answer': '$1,000,000,000', 'start': 97, 'end': 97} nlp( "https://www.accountingcoach.com/wp-content/uploads/2013/10/income-statement-example@2x.png", "What are the 2020 net sales?" ) # {'score': 0.59147286, 'answer': '$ 3,750', 'start': 19, 'end': 20} ``` **NOTE**: This model and pipeline recently landed in transformers via [PR #18407](https://github.com/huggingface/transformers/pull/18407) and [PR #18414](https://github.com/huggingface/transformers/pull/18414), so you'll need to use a recent version of transformers, for example: ```bash pip install git+https://github.com/huggingface/transformers.git@2ef774211733f0acf8d3415f9284c49ef219e991 ``` ## About us This model was created by the team at [Impira](https://www.impira.com/).
rajpurkar/results
rajpurkar
2022-09-14T08:01:27Z
166
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-14T07:55:47Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 13 | 3.1802 | | No log | 2.0 | 26 | 3.1813 | | No log | 3.0 | 39 | 3.1822 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.13.0.dev20220912 - Datasets 2.4.0 - Tokenizers 0.11.0
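As a causal LM fine-tuned from distilgpt2, the checkpoint can be sampled with the standard text-generation pipeline. Since the training data is undocumented above, treat this only as a smoke-test sketch, with no claim about output quality:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="rajpurkar/results")
print(generator("Once upon a time", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```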
Vasanth/image_captioner_vit_gpt2
Vasanth
2022-09-14T07:47:02Z
46
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-09-13T23:49:03Z
--- tags: - generated_from_trainer model-index: - name: image_captioner_vit_gpt2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_captioner_vit_gpt2 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
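The card gives no inference snippet; for a ViT-encoder/GPT-2-decoder captioner, the usual pattern is sketched below. Whether this repo bundles its own image processor and tokenizer is an assumption; if loading them from the repo fails, the components of a reference checkpoint such as `nlpconnect/vit-gpt2-image-captioning` are a common fallback (on transformers older than ~4.26, use `ViTFeatureExtractor` instead of `ViTImageProcessor`):

```python
# Inference sketch for a ViT encoder / GPT-2 decoder captioner. Assumption: the
# repo bundles a ViT image processor and a GPT-2 tokenizer; adjust if it does not.
import requests
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

name = "Vasanth/image_captioner_vit_gpt2"
model = VisionEncoderDecoderModel.from_pretrained(name)
processor = ViTImageProcessor.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```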
sujathass/bert-base-uncased-finetuned-cola
sujathass
2022-09-14T06:37:34Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-14T03:35:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5910926879747481 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4822 - Matthews Correlation: 0.5911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 268 | 0.4768 | 0.4943 | | 0.4116 | 2.0 | 536 | 0.4275 | 0.5803 | | 0.4116 | 3.0 | 804 | 0.4822 | 0.5911 | | 0.1884 | 4.0 | 1072 | 0.5984 | 0.5880 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
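A small sketch for scoring sentence acceptability follows. Note that the CoLA label convention (label 1 = grammatically acceptable, label 0 = unacceptable) is an assumption here, since the card does not export label names:

```python
# Sketch: scoring sentence acceptability with this CoLA fine-tune. The label
# convention (1 = acceptable) is an assumption, not documented in the card.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "sujathass/bert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

for sentence in ["The student read the book.", "Book the was student read by the."]:
    logits = model(**tokenizer(sentence, return_tensors="pt")).logits
    print(sentence, "->", logits.softmax(dim=-1).tolist())
```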
paust/pko-t5-base
paust
2022-09-14T04:44:45Z
884
18
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "ko", "arxiv:2105.09680", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-16T11:59:13Z
--- language: ko license: cc-by-4.0 --- # pko-t5-base [Source Code](https://github.com/paust-team/pko-t5) pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data. To tokenize Korean it uses BBPE, which has no OOV tokens, instead of sentencepiece, and it was trained with unsupervised learning only, applying T5's span corruption task to Korean corpora (Namuwiki, Wikipedia, the Modu Corpus, etc.). Please fine-tune pko-t5 on your target task before using it. ## Usage The model is accessible through the transformers API. When tokenizing, please use `T5TokenizerFast`, not `T5Tokenizer`; the model itself can be used as a plain `T5ForConditionalGeneration`. ### Example ```python from transformers import T5TokenizerFast, T5ForConditionalGeneration tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-base') model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-base') input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids outputs = model(input_ids=input_ids, labels=labels) print(f"loss={outputs.loss} logits={outputs.logits}") ``` ## KLUE evaluation (dev) | | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) | |-----|------------------------------------------------------------------|-----------------|-------------------|-----------|-----------------------|---------------|-----------|-------------| | | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | **75.26/-** | | FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 66.46 | 93.15 | 43.81/46.58 | | FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 67.23 | 97.28 | 61.53/64.74 | | FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | **75.17** | **97.60** | 68.01/71.44 | | MT | pko-t5-small | 84.54 | 68.50/72.02 | 51.16 | 74.69 | 66.11 | 80.40 | 43.60/46.28 | | MT | pko-t5-base | 86.89 | 83.96/80.30 | 72.03 | 85.27 | 66.59 | 95.05 | 61.11/63.94 | | MT | pko-t5-large | 87.57 | 91.93/86.29 | 83.63 | 87.41 | 71.34 | 96.99 | 70.70/73.72 | - FT: single-task fine-tuning / MT: multi-task fine-tuning - [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set, as reported in the KLUE paper ## License pko-t5, built by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
paust/pko-t5-small
paust
2022-09-14T04:41:53Z
386
5
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "ko", "arxiv:2105.09680", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-16T10:26:56Z
--- language: ko license: cc-by-4.0 --- # pko-t5-small [Source Code](https://github.com/paust-team/pko-t5) pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data. To tokenize Korean it uses BBPE, which has no OOV tokens, instead of sentencepiece, and it was trained with unsupervised learning only, applying T5's span corruption task to Korean corpora (Namuwiki, Wikipedia, the Modu Corpus, etc.). Please fine-tune pko-t5 on your target task before using it. ## Usage The model is accessible through the transformers API. When tokenizing, please use `T5TokenizerFast`, not `T5Tokenizer`; the model itself can be used as a plain `T5ForConditionalGeneration`. ### Example ```python from transformers import T5TokenizerFast, T5ForConditionalGeneration tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-small') model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-small') input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids outputs = model(input_ids=input_ids, labels=labels) print(f"loss={outputs.loss} logits={outputs.logits}") ``` ## KLUE evaluation (dev) | | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) | |-----|------------------------------------------------------------------|-----------------|-------------------|-----------|-----------------------|---------------|-----------|-------------| | | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | **75.26/-** | | FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 66.46 | 93.15 | 43.81/46.58 | | FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 67.23 | 97.28 | 61.53/64.74 | | FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | **75.17** | **97.60** | 68.01/71.44 | | MT | pko-t5-small | 84.54 | 68.50/72.02 | 51.16 | 74.69 | 66.11 | 80.40 | 43.60/46.28 | | MT | pko-t5-base | 86.89 | 83.96/80.30 | 72.03 | 85.27 | 66.59 | 95.05 | 61.11/63.94 | | MT | pko-t5-large | 87.57 | 91.93/86.29 | 83.63 | 87.41 | 71.34 | 96.99 | 70.70/73.72 | - FT: single-task fine-tuning / MT: multi-task fine-tuning - [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set, as reported in the KLUE paper ## License pko-t5, built by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
momo/distilbert-base-uncased-finetuned-ner
momo
2022-09-14T04:35:14Z
112
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9262123053131559 - name: Recall type: recall value: 0.9380243875153821 - name: F1 type: f1 value: 0.9320809248554913 - name: Accuracy type: accuracy value: 0.9839547555880344 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0617 - Precision: 0.9262 - Recall: 0.9380 - F1: 0.9321 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2465 | 1.0 | 878 | 0.0727 | 0.9175 | 0.9199 | 0.9187 | 0.9808 | | 0.0527 | 2.0 | 1756 | 0.0610 | 0.9245 | 0.9361 | 0.9303 | 0.9834 | | 0.0313 | 3.0 | 2634 | 0.0617 | 0.9262 | 0.9380 | 0.9321 | 0.9840 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.8.0 - Datasets 1.16.1 - Tokenizers 0.10.3
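The card above omits a usage snippet; a standard token-classification pipeline call works for a CoNLL-2003-style NER fine-tune. The `aggregation_strategy="simple"` option is a transformers pipeline feature (not something this card specifies), used here to merge word pieces into whole-entity spans:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="momo/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)
print(ner("Hugging Face is based in New York City."))
```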
dennislibre/westworld
dennislibre
2022-09-14T03:55:30Z
0
0
fastai
[ "fastai", "image-classification", "region:us" ]
image-classification
2022-09-14T03:38:10Z
--- tags: - fastai - image-classification --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
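Since this is a fastai image-classification export, one plausible way to load it is through `huggingface_hub`'s fastai integration. The sketch below assumes the repo was pushed with the standard fastai workflow, and the image path is a placeholder:

```python
# Sketch: huggingface_hub provides a fastai loader; the image path is a placeholder.
from huggingface_hub import from_pretrained_fastai

learn = from_pretrained_fastai("dennislibre/westworld")
prediction, idx, probs = learn.predict("example_frame.jpg")
print(prediction, float(probs[idx]))
```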
sd-concepts-library/uma-style-classic
sd-concepts-library
2022-09-14T02:53:06Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-14T02:53:02Z
--- license: mit --- ### uma-style-classic on Stable Diffusion This is the `<uma>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<uma> 0](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/21.jpg) ![<uma> 1](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/33.jpg) ![<uma> 2](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/12.jpg) ![<uma> 3](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/26.jpg) ![<uma> 4](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/48.jpg) ![<uma> 5](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/22.jpg) ![<uma> 6](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/9.jpg) ![<uma> 7](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/27.jpg) ![<uma> 8](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/11.jpg) ![<uma> 9](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/10.jpg) ![<uma> 10](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/19.jpg) ![<uma> 11](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/13.jpg) ![<uma> 12](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/32.jpg) ![<uma> 13](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/24.jpg) ![<uma> 14](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/4.jpg) ![<uma> 15](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/35.jpg) ![<uma> 16](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/42.jpg) ![<uma> 17](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/7.jpg) ![<uma> 18](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/3.jpg) ![<uma> 19](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/15.jpg) ![<uma> 20](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/17.jpg) ![<uma> 21](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/6.jpg) ![<uma> 22](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/37.jpg) ![<uma> 23](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/16.jpg) ![<uma> 24](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/23.jpg) ![<uma> 25](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/29.jpg) ![<uma> 26](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/40.jpg) ![<uma> 27](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/51.jpg) ![<uma> 28](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/5.jpg) ![<uma> 29](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/50.jpg) ![<uma> 30](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/28.jpg) ![<uma> 31](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/8.jpg) ![<uma> 32](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/36.jpg) ![<uma> 33](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/14.jpg) ![<uma> 34](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/25.jpg) ![<uma> 35](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/39.jpg) ![<uma> 36](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/20.jpg) ![<uma> 37](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/34.jpg) ![<uma> 38](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/2.jpg) ![<uma> 39](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/41.jpg) ![<uma> 40](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/18.jpg) ![<uma> 41](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/1.jpg) ![<uma> 42](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/38.jpg) ![<uma> 43](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/31.jpg) ![<uma> 44](https://huggingface.co/sd-concepts-library/uma-style-classic/resolve/main/concept_images/30.jpg)
Padomin/t5-base-TEDxJP-3front-1body-3rear
Padomin
2022-09-14T01:46:01Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-13T15:58:46Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-3front-1body-3rear results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-3front-1body-3rear This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.4427 - Wer: 0.1709 - Mer: 0.1651 - Wil: 0.2519 - Wip: 0.7481 - Hits: 55802 - Substitutions: 6391 - Deletions: 2394 - Insertions: 2252 - Cer: 0.1337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.628 | 1.0 | 1457 | 0.4785 | 0.2008 | 0.1912 | 0.2803 | 0.7197 | 54855 | 6650 | 3082 | 3234 | 0.1735 | | 0.5271 | 2.0 | 2914 | 0.4292 | 0.1779 | 0.1718 | 0.2602 | 0.7398 | 55387 | 6527 | 2673 | 2293 | 0.1469 | | 0.4724 | 3.0 | 4371 | 0.4222 | 0.1719 | 0.1664 | 0.2530 | 0.7470 | 55610 | 6365 | 2612 | 2123 | 0.1353 | | 0.4125 | 4.0 | 5828 | 0.4174 | 0.1707 | 0.1653 | 0.2512 | 0.7488 | 55694 | 6304 | 2589 | 2135 | 0.1342 | | 0.3646 | 5.0 | 7285 | 0.4218 | 0.1712 | 0.1655 | 0.2521 | 0.7479 | 55756 | 6373 | 2458 | 2224 | 0.1339 | | 0.3232 | 6.0 | 8742 | 0.4253 | 0.1695 | 0.1642 | 0.2505 | 0.7495 | 55726 | 6340 | 2521 | 2087 | 0.1333 | | 0.3583 | 7.0 | 10199 | 0.4303 | 0.1699 | 0.1645 | 0.2514 | 0.7486 | 55733 | 6393 | 2461 | 2120 | 0.1338 | | 0.2894 | 8.0 | 11656 | 0.4355 | 0.1699 | 0.1643 | 0.2508 | 0.7492 | 55827 | 6371 | 2389 | 2215 | 0.1325 | | 0.2825 | 9.0 | 13113 | 0.4399 | 0.1705 | 0.1648 | 0.2518 | 0.7482 | 55785 | 6409 | 2393 | 2207 | 0.1334 | | 0.2901 | 10.0 | 14570 | 0.4427 | 0.1709 | 0.1651 | 0.2519 | 0.7481 | 55802 | 6391 | 2394 | 2252 | 0.1337 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
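A usage sketch for transcript post-editing with this checkpoint follows. The input string is a hypothetical filler-laden Japanese ASR transcript, and the exact context windowing implied by "3front-1body-3rear" is not documented in this card, so treat the snippet as illustrative only:

```python
# Sketch: disfluency cleanup of a TEDxJP-style transcript. The input below is a
# hypothetical example; the model's context formatting is an assumption here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "Padomin/t5-base-TEDxJP-3front-1body-3rear"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

input_ids = tokenizer("えーっとこれはあのーテストです", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```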
sd-concepts-library/colossus
sd-concepts-library
2022-09-14T01:13:25Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T01:13:14Z
--- license: mit --- ### colossus on Stable Diffusion This is the `<colossus>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<colossus> 0](https://huggingface.co/sd-concepts-library/colossus/resolve/main/concept_images/1.jpeg) ![<colossus> 1](https://huggingface.co/sd-concepts-library/colossus/resolve/main/concept_images/5.jpeg) ![<colossus> 2](https://huggingface.co/sd-concepts-library/colossus/resolve/main/concept_images/0.jpeg) ![<colossus> 3](https://huggingface.co/sd-concepts-library/colossus/resolve/main/concept_images/4.jpeg) ![<colossus> 4](https://huggingface.co/sd-concepts-library/colossus/resolve/main/concept_images/2.jpeg) ![<colossus> 5](https://huggingface.co/sd-concepts-library/colossus/resolve/main/concept_images/3.jpeg)
sd-concepts-library/ie-gravestone
sd-concepts-library
2022-09-14T00:47:38Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T00:47:31Z
--- license: mit --- ### ie gravestone on Stable Diffusion This is the `<internet-explorer-gravestone>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<internet-explorer-gravestone> 0](https://huggingface.co/sd-concepts-library/ie-gravestone/resolve/main/concept_images/1.jpeg) ![<internet-explorer-gravestone> 1](https://huggingface.co/sd-concepts-library/ie-gravestone/resolve/main/concept_images/0.jpeg) ![<internet-explorer-gravestone> 2](https://huggingface.co/sd-concepts-library/ie-gravestone/resolve/main/concept_images/2.jpeg) ![<internet-explorer-gravestone> 3](https://huggingface.co/sd-concepts-library/ie-gravestone/resolve/main/concept_images/3.jpeg)
BigSalmon/Infill04
BigSalmon
2022-09-14T00:46:06Z
163
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-14T00:38:55Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Infill04") model = AutoModelForCausalLM.from_pretrained("BigSalmon/Infill04") ``` ``` Try it out here: https://huggingface.co/spaces/BigSalmon/TestAnyGPTModel ``` ``` prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]""" input_ids = tokenizer.encode(prompt, return_tensors='pt') outputs = model.generate(input_ids=input_ids, max_length=10 + len(prompt), temperature=1.0, top_k=50, top_p=0.95, do_sample=True, num_return_sequences=5, early_stopping=True) for i in range(5): print(tokenizer.decode(outputs[i])) ``` Most likely outputs (Disclaimer: I highly recommend using this over just generating): ``` import torch prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]""" device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) text = tokenizer.encode(prompt) myinput, past_key_values = torch.tensor([text]), None myinput = myinput.to(device) logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False) logits = logits[0, -1] probabilities = torch.nn.functional.softmax(logits, dim=-1) best_logits, best_indices = logits.topk(250) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() print(best_words) ``` Infill / Infilling / Masking / Phrase Masking ``` His contention [blank] by the evidence [sep] was refuted [answer] *** Few sights are as [blank] New York City as the colorful, flashing signage of its bodegas [sep] synonymous with [answer] *** When rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer] *** The library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer] *** ``` ``` original: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the [MASK] star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently [MASK] the big screen in Garden State, which he also directed. Farrell is pencilled in to [MASK] of Crockett in a film version of 1980s police [MASK] Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme. infill: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the show. The film star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently been seen on the big screen in Garden State, which he also directed. Farrell is pencilled in to play the role of Crockett in a film version of 1980s police drama Miami Vice. Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme. ```
Padomin/t5-base-TEDxJP-1front-1body-1rear
Padomin
2022-09-14T00:26:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-13T16:16:53Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-1front-1body-1rear results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-1front-1body-1rear This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.4600 - Wer: 0.1742 - Mer: 0.1683 - Wil: 0.2562 - Wip: 0.7438 - Hits: 55625 - Substitutions: 6495 - Deletions: 2467 - Insertions: 2291 - Cer: 0.1364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.6478 | 1.0 | 1457 | 0.4880 | 0.2256 | 0.2100 | 0.2999 | 0.7001 | 54825 | 6842 | 2920 | 4808 | 0.2019 | | 0.542 | 2.0 | 2914 | 0.4461 | 0.1886 | 0.1807 | 0.2697 | 0.7303 | 55225 | 6615 | 2747 | 2817 | 0.1577 | | 0.4873 | 3.0 | 4371 | 0.4390 | 0.1764 | 0.1702 | 0.2584 | 0.7416 | 55541 | 6519 | 2527 | 2344 | 0.1392 | | 0.4271 | 4.0 | 5828 | 0.4361 | 0.1750 | 0.1691 | 0.2567 | 0.7433 | 55512 | 6453 | 2622 | 2226 | 0.1381 | | 0.3705 | 5.0 | 7285 | 0.4366 | 0.1741 | 0.1684 | 0.2558 | 0.7442 | 55508 | 6427 | 2652 | 2164 | 0.1358 | | 0.3557 | 6.0 | 8742 | 0.4424 | 0.1738 | 0.1679 | 0.2555 | 0.7445 | 55600 | 6453 | 2534 | 2235 | 0.1369 | | 0.3838 | 7.0 | 10199 | 0.4471 | 0.1741 | 0.1684 | 0.2562 | 0.7438 | 55550 | 6473 | 2564 | 2210 | 0.1362 | | 0.3095 | 8.0 | 11656 | 0.4517 | 0.1746 | 0.1685 | 0.2566 | 0.7434 | 55618 | 6499 | 2470 | 2305 | 0.1367 | | 0.306 | 9.0 | 13113 | 0.4573 | 0.1748 | 0.1688 | 0.2570 | 0.7430 | 55601 | 6517 | 2469 | 2304 | 0.1369 | | 0.3073 | 10.0 | 14570 | 0.4600 | 0.1742 | 0.1683 | 0.2562 | 0.7438 | 55625 | 6495 | 2467 | 2291 | 0.1364 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
sd-concepts-library/uma
sd-concepts-library
2022-09-14T00:08:35Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-14T00:08:27Z
--- license: mit --- ### uma on Stable Diffusion This is the `<uma>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<uma> 0](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/21.jpg) ![<uma> 1](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/33.jpg) ![<uma> 2](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/12.jpg) ![<uma> 3](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/26.jpg) ![<uma> 4](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/48.jpg) ![<uma> 5](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/22.jpg) ![<uma> 6](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/9.jpg) ![<uma> 7](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/27.jpg) ![<uma> 8](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/11.jpg) ![<uma> 9](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/10.jpg) ![<uma> 10](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/19.jpg) ![<uma> 11](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/13.jpg) ![<uma> 12](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/32.jpg) ![<uma> 13](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/24.jpg) ![<uma> 14](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/4.jpg) ![<uma> 15](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/35.jpg) ![<uma> 16](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/42.jpg) ![<uma> 17](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/7.jpg) ![<uma> 18](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/3.jpg) ![<uma> 19](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/15.jpg) ![<uma> 20](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/17.jpg) ![<uma> 21](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/6.jpg) ![<uma> 22](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/37.jpg) ![<uma> 23](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/16.jpg) ![<uma> 24](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/23.jpg) ![<uma> 25](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/29.jpg) ![<uma> 26](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/40.jpg) ![<uma> 27](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/51.jpg) ![<uma> 28](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/5.jpg) ![<uma> 29](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/50.jpg) ![<uma> 30](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/28.jpg) ![<uma> 31](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/8.jpg) ![<uma> 32](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/36.jpg) ![<uma> 33](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/14.jpg) ![<uma> 34](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/25.jpg) ![<uma> 35](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/39.jpg) ![<uma> 36](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/20.jpg) ![<uma> 37](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/34.jpg) ![<uma> 38](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/2.jpg) ![<uma> 39](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/41.jpg) ![<uma> 40](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/18.jpg) ![<uma> 41](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/1.jpg) ![<uma> 42](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/38.jpg) ![<uma> 43](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/31.jpg) ![<uma> 44](https://huggingface.co/sd-concepts-library/uma/resolve/main/concept_images/30.jpg)
mfreihaut/distilbert-base-uncased-finetuned-cola
mfreihaut
2022-09-13T23:58:02Z
103
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-08T18:13:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5616581968995631 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7956 - Matthews Correlation: 0.5617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5235 | 1.0 | 535 | 0.5394 | 0.4192 | | 0.3498 | 2.0 | 1070 | 0.4989 | 0.5065 | | 0.2343 | 3.0 | 1605 | 0.5506 | 0.5518 | | 0.1744 | 4.0 | 2140 | 0.7471 | 0.5354 | | 0.1243 | 5.0 | 2675 | 0.7956 | 0.5617 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 1.18.4 - Tokenizers 0.12.1
sd-concepts-library/tubby-cats
sd-concepts-library
2022-09-13T23:26:36Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-13T23:26:30Z
--- license: mit --- ### Tubby Cats on Stable Diffusion This is the `<tubby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<tubby> 0](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/21.jpeg) ![<tubby> 1](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/8.jpeg) ![<tubby> 2](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/18.jpeg) ![<tubby> 3](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/22.jpeg) ![<tubby> 4](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/14.jpeg) ![<tubby> 5](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/10.jpeg) ![<tubby> 6](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/1.jpeg) ![<tubby> 7](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/16.jpeg) ![<tubby> 8](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/15.jpeg) ![<tubby> 9](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/12.jpeg) ![<tubby> 10](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/11.jpeg) ![<tubby> 11](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/9.jpeg) ![<tubby> 12](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/23.jpeg) ![<tubby> 13](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/5.jpeg) ![<tubby> 14](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/0.jpeg) ![<tubby> 15](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/17.jpeg) ![<tubby> 16](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/4.jpeg) ![<tubby> 17](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/13.jpeg) ![<tubby> 18](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/2.jpeg) ![<tubby> 19](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/19.jpeg) ![<tubby> 20](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/3.jpeg) ![<tubby> 21](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/6.jpeg) ![<tubby> 22](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/20.jpeg) ![<tubby> 23](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/7.jpeg)
huggingtweets/taylorswift13
huggingtweets
2022-09-13T23:18:45Z
110
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/taylorswift13/1663111120837/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1564101520043479043/eJpWqka2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Taylor Swift</div> <div style="text-align: center; font-size: 14px;">@taylorswift13</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Taylor Swift. | Data | Taylor Swift | | --- | --- | | Tweets downloaded | 721 | | Retweets | 89 | | Short tweets | 88 | | Tweets kept | 544 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/155f8g1q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @taylorswift13's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mywgndz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mywgndz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/taylorswift13') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tylerthecreator
huggingtweets
2022-09-13T23:17:27Z
106
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/tylerthecreator/1663110868565/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1405605347755139074/kKJuRJS0_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">T</div> <div style="text-align: center; font-size: 14px;">@tylerthecreator</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from T. | Data | T | | --- | --- | | Tweets downloaded | 3231 | | Retweets | 612 | | Short tweets | 635 | | Tweets kept | 1984 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gsjklloe/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tylerthecreator's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zwt7m6t) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zwt7m6t/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tylerthecreator') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/metallica
huggingtweets
2022-09-13T23:15:14Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:15:06Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1407334956716769288/HFgpsbmW_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Metallica</div> <div style="text-align: center; font-size: 14px;">@metallica</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Metallica. | Data | Metallica | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 390 | | Short tweets | 185 | | Tweets kept | 2675 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n6wz64s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @metallica's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ea9ctpp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ea9ctpp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/metallica') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/mariahcarey
huggingtweets
2022-09-13T23:15:00Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:13:30Z
--- language: en thumbnail: http://www.huggingtweets.com/mariahcarey/1663110896270/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1486066100248981508/AwBY6X2x_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Mariah Carey</div> <div style="text-align: center; font-size: 14px;">@mariahcarey</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Mariah Carey. | Data | Mariah Carey | | --- | --- | | Tweets downloaded | 3225 | | Retweets | 697 | | Short tweets | 388 | | Tweets kept | 2140 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1euvplmf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mariahcarey's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lc0u7bu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lc0u7bu/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mariahcarey') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/1gunnagunna-iamcardib-pnbrock
huggingtweets
2022-09-13T23:12:13Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:12:04Z
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1478425372438011912/GQujYoYi_400x400.jpg&#39;)">
</div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1565550091334828032/flg5WPOb_400x400.jpg&#39;)">
</div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1541590121102905345/jxbNo0z0_400x400.jpg&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">WUNNA & PnBRock & Cardi B</div>
<div style="text-align: center; font-size: 14px;">@1gunnagunna-iamcardib-pnbrock</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from WUNNA & PnBRock & Cardi B.

| Data | WUNNA | PnBRock | Cardi B |
| --- | --- | --- | --- |
| Tweets downloaded | 2827 | 3104 | 3073 |
| Retweets | 2216 | 1190 | 1500 |
| Short tweets | 125 | 310 | 348 |
| Tweets kept | 486 | 1604 | 1225 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cayvnkn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @1gunnagunna-iamcardib-pnbrock's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/od188nqh) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/od188nqh/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/1gunnagunna-iamcardib-pnbrock')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
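Sampling is stochastic, so repeated calls return different tweets; transformers' `set_seed` fixes the random state so a run can be reproduced. A minimal sketch (the seed value is arbitrary):

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/1gunnagunna-iamcardib-pnbrock')

set_seed(42)  # fix the RNGs so the sampled tweets are reproducible across runs
for out in generator("My dream is", num_return_sequences=5):
    print(out['generated_text'])
```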
huggingtweets/piercetheveil
huggingtweets
2022-09-13T23:09:47Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:08:05Z
---
language: en
thumbnail: http://www.huggingtweets.com/piercetheveil/1663110582992/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1565188375157493760/pzWyT_Xh_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pierce The Veil</div>
<div style="text-align: center; font-size: 14px;">@piercetheveil</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Pierce The Veil.

| Data | Pierce The Veil |
| --- | --- |
| Tweets downloaded | 3095 |
| Retweets | 456 |
| Short tweets | 206 |
| Tweets kept | 2433 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rbpf3rpq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @piercetheveil's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rksak7da) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rksak7da/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/piercetheveil')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
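If you need more control than the pipeline offers, the same checkpoint loads with the standard tokenizer and model classes. A minimal sketch, assuming a recent transformers version; note GPT-2 defines no pad token, so the EOS token is reused:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/piercetheveil')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/piercetheveil')

inputs = tokenizer("My dream is", return_tensors='pt')
with torch.no_grad():
    ids = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```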
huggingtweets/zodiac_mf
huggingtweets
2022-09-13T23:09:43Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:08:10Z
---
language: en
thumbnail: http://www.huggingtweets.com/zodiac_mf/1663110577337/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1135964801992200192/QZ4fMALY_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ZODIAC MOTHERFUCKER</div>
<div style="text-align: center; font-size: 14px;">@zodiac_mf</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from ZODIAC MOTHERFUCKER.

| Data | ZODIAC MOTHERFUCKER |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 374 |
| Short tweets | 387 |
| Tweets kept | 2485 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1czfq5eo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zodiac_mf's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n0oxy8f) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n0oxy8f/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/zodiac_mf')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
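The pipeline also accepts a list of prompts and returns one result list per prompt, which is convenient for generating several tweets in one call. A small sketch; the prompts and parameter values are arbitrary examples:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/zodiac_mf')

# With a list input, the pipeline returns a list of result lists, one per prompt.
prompts = ["My dream is", "Today I will", "The secret to life is"]
for prompt, results in zip(prompts, generator(prompts, max_length=40, do_sample=True)):
    print(prompt, '->', results[0]['generated_text'])
```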
huggingtweets/c9mang0
huggingtweets
2022-09-13T23:07:19Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:05:26Z
---
language: en
thumbnail: http://www.huggingtweets.com/c9mang0/1663110434885/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1360714045599358978/oh2wRcYm_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🐐 🐐 🐐 🐐 🐐 🐐 🐐</div>
<div style="text-align: center; font-size: 14px;">@c9mang0</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from 🐐 🐐 🐐 🐐 🐐 🐐 🐐.

| Data | 🐐 🐐 🐐 🐐 🐐 🐐 🐐 |
| --- | --- |
| Tweets downloaded | 3223 |
| Retweets | 248 |
| Short tweets | 547 |
| Tweets kept | 2428 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pk7njzw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @c9mang0's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9ttvld6j) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9ttvld6j/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/c9mang0')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
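By default the pipeline echoes the prompt at the start of each result; `return_full_text=False` returns only the model's continuation. A small sketch with illustrative parameter values:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/c9mang0')

# return_full_text=False strips the prompt so only the continuation is returned.
completions = generator(
    "My dream is",
    max_new_tokens=30,
    do_sample=True,
    num_return_sequences=3,
    return_full_text=False,
)
for c in completions:
    print(c['generated_text'])
```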
huggingtweets/lingua_ignota_
huggingtweets
2022-09-13T23:06:10Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-13T23:05:25Z
---
language: en
thumbnail: http://www.huggingtweets.com/lingua_ignota_/1663110365756/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1425094858558017550/-xYab3Fe_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LINGUA IGNOTA</div>
<div style="text-align: center; font-size: 14px;">@lingua_ignota_</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from LINGUA IGNOTA.

| Data | LINGUA IGNOTA |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 554 |
| Short tweets | 640 |
| Tweets kept | 2042 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bcjlrml/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lingua_ignota_'s tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pjyx8m2) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pjyx8m2/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/lingua_ignota_')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
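Generation runs on CPU unless a device is requested; passing `device=0` moves the pipeline to the first CUDA GPU. A minimal sketch:

```python
import torch
from transformers import pipeline

# device=0 selects the first CUDA GPU; -1 keeps the pipeline on CPU.
device = 0 if torch.cuda.is_available() else -1
generator = pipeline('text-generation', model='huggingtweets/lingua_ignota_', device=device)

print(generator("My dream is", max_length=50, do_sample=True)[0]['generated_text'])
```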
huggingtweets/gretathunberg
huggingtweets
2022-09-13T23:01:27Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-31T20:58:14Z
---
language: en
thumbnail: http://www.huggingtweets.com/gretathunberg/1663110082774/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1459213153301053442/rL5hhpAI_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Greta Thunberg</div>
<div style="text-align: center; font-size: 14px;">@gretathunberg</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Greta Thunberg.

| Data | Greta Thunberg |
| --- | --- |
| Tweets downloaded | 3300 |
| Retweets | 2457 |
| Short tweets | 28 |
| Tweets kept | 815 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g6d8tpo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gretathunberg's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iv3jq06) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iv3jq06/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/gretathunberg')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
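Greedy decoding is deterministic but often repetitive, while sampling trades determinism for variety. An illustrative comparison; the top-k value of 50 is an arbitrary choice:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/gretathunberg')

# do_sample=False decodes greedily; top_k=50 samples from the 50 most likely next tokens.
greedy = generator("My dream is", max_length=40, do_sample=False)
sampled = generator("My dream is", max_length=40, do_sample=True, top_k=50)
print("greedy :", greedy[0]['generated_text'])
print("sampled:", sampled[0]['generated_text'])
```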
espnet/fsc_challenge_slu_2pass_transformer_gt
espnet
2022-09-13T21:48:07Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:fsc_challenge", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-09-13T21:40:42Z
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- fsc_challenge
license: cc-by-4.0
---

## ESPnet2 ASR model

### `espnet/fsc_challenge_slu_2pass_transformer_gt`

This model was trained by Siddhant using the fsc_challenge recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already.

```bash
cd espnet
git checkout 3b54bfe52a294cdfce668c20d777bfa65f413745
pip install -e .
cd egs2/fsc_challenge/slu1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/fsc_challenge_slu_2pass_transformer_gt
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Mar 13 20:59:06 EDT 2022`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a3`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `97b9dad4dbca71702cb7928a126ec45d96414a3f`
- Commit date: `Mon Sep 13 22:55:04 2021 +0900`

## asr_train_asr_hubert_transformer_adam_specaug_deliberation_transformer_3_raw_en_word
### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/spk_test|3349|17937|99.9|0.1|0.0|0.0|0.1|0.6|
|inference_asr_model_valid.acc.ave_5best/utt_test|4204|22540|89.8|6.6|3.6|0.0|10.2|27.6|

### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/spk_test|3349|152191|100.0|0.0|0.0|0.0|0.1|0.6|
|inference_asr_model_valid.acc.ave_5best/utt_test|4204|191435|94.5|2.8|2.7|0.5|6.0|27.6|

### TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|

## ASR config

<details><summary>expand</summary>

```
config: conf/tuning/train_asr_hubert_transformer_adam_specaug_deliberation_transformer_3.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_hubert_transformer_adam_specaug_deliberation_transformer_3_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 25
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
-   - train
    - loss
    - min
-   - valid
    - loss
    - min
-   - train
    - acc
    - max
-   - valid
    - acc
    - max
keep_nbest_models: 5
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../../fsc_challenge/asr1/exp/asr_train_asr_hubert_transformer_adam_specaug_old_raw_en_word/valid.acc.ave_5best.pth:encoder:encoder
ignore_init_mismatch: false
freeze_param:
- encoder
- postdecoder.model
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
- exp/asr_stats_raw_en_word/train/transcript_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
- exp/asr_stats_raw_en_word/valid/transcript_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
-   - dump/raw/train/wav.scp
    - speech
    - sound
-   - dump/raw/train/text
    - text
    - text
-   - dump/raw/train/transcript
    - transcript
    - text
valid_data_path_and_name_and_type:
-   - dump/raw/valid/wav.scp
    - speech
    - sound
-   - dump/raw/valid/text
    - text
    - text
-   - dump/raw/valid/transcript
    - transcript
    - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.0002
scheduler: warmuplr
scheduler_conf:
    warmup_steps: 25000
token_list:
- <blank>
- <unk>
- the
- turn
- lights
- in
- up
- 'on'
- down
- temperature
- heat
- switch
- kitchen
- volume
- 'off'
- increase_volume_none
- bedroom
- washroom
- decrease_volume_none
- language
- bathroom
- decrease
- my
- to
- increase
- decrease_heat_washroom
- increase_heat_washroom
- music
- heating
- bring
- increase_heat_none
- too
- decrease_heat_none
- me
- change_language_none_none
- activate_lights_washroom
- set
- activate_lights_kitchen
- activate_music_none
- lamp
- deactivate_music_none
- increase_heat_bedroom
- i
- increase_heat_kitchen
- sound
- get
- decrease_heat_kitchen
- loud
- activate_lights_bedroom
- deactivate_lights_bedroom
- decrease_heat_bedroom
- need
- deactivate_lights_kitchen
- bring_newspaper_none
- newspaper
- bring_shoes_none
- shoes
- bring_socks_none
- socks
- activate_lights_none
- deactivate_lights_none
- louder
- go
- deactivate_lights_washroom
- change_language_Chinese_none
- chinese
- could
- you
- bring_juice_none
- juice
- deactivate_lamp_none
- make
- activate_lamp_none
- it
- stop
- play
- change
- quiet
- change_language_Korean_none
- korean
- some
- practice
- change_language_German_none
- german
- ok
- now
- main
- change_language_English_none
- english
- its
- hear
- pause
- this
- thats
- lower
- far
- audio
- please
- fetch
- phones
- a
- different
- start
- resume
- softer
- couldnt
- anything
- quieter
- put
- video
- is
- low
- max
- phone
- mute
- reduce
- use
- languages
- allow
- device
- system
- <sos/eos>
transcript_token_list:
- <blank>
- <unk>
- the
- turn
- lights
- in
- up
- 'on'
- down
- temperature
- heat
- switch
- kitchen
- volume
- 'off'
- bedroom
- washroom
- language
- bathroom
- decrease
- my
- to
- increase
- music
- heating
- bring
- too
- me
- set
- lamp
- i
- sound
- get
- loud
- need
- newspaper
- shoes
- socks
- louder
- go
- chinese
- could
- you
- juice
- make
- it
- stop
- play
- change
- quiet
- korean
- some
- practice
- german
- ok
- now
- main
- english
- its
- hear
- pause
- this
- thats
- lower
- far
- audio
- please
- fetch
- phones
- a
- different
- start
- resume
- softer
- couldnt
- anything
- quieter
- put
- video
- is
- low
- max
- phone
- mute
- reduce
- use
- languages
- allow
- device
- system
- <sos/eos>
two_pass: false
pre_postencoder_norm: false
init: null
input_size: null
ctc_conf:
    dropout_rate: 0.0
    ctc_type: builtin
    reduce: true
    ignore_nan_grad: true
model_conf:
    transcript_token_list:
    - <blank>
    - <unk>
    - the
    - turn
    - lights
    - in
    - up
    - 'on'
    - down
    - temperature
    - heat
    - switch
    - kitchen
    - volume
    - 'off'
    - bedroom
    - washroom
    - language
    - bathroom
    - decrease
    - my
    - to
    - increase
    - music
    - heating
    - bring
    - too
    - me
    - set
    - lamp
    - i
    - sound
    - get
    - loud
    - need
    - newspaper
    - shoes
    - socks
    - louder
    - go
    - chinese
    - could
    - you
    - juice
    - make
    - it
    - stop
    - play
    - change
    - quiet
    - korean
    - some
    - practice
    - german
    - ok
    - now
    - main
    - english
    - its
    - hear
    - pause
    - this
    - thats
    - lower
    - far
    - audio
    - please
    - fetch
    - phones
    - a
    - different
    - start
    - resume
    - softer
    - couldnt
    - anything
    - quieter
    - put
    - video
    - is
    - low
    - max
    - phone
    - mute
    - reduce
    - use
    - languages
    - allow
    - device
    - system
    - <sos/eos>
    ctc_weight: 0.5
    ignore_id: -1
    lsm_weight: 0.0
    length_normalized_loss: false
    report_cer: true
    report_wer: true
    sym_space: <space>
    sym_blank: <blank>
    extract_feats_in_collect_stats: true
    two_pass: false
    pre_postencoder_norm: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
    frontend_conf:
        upstream: hubert_large_ll60k
        download_dir: ./hub
    multilayer_feature: true
    fs: 16k
specaug: specaug
specaug_conf:
    apply_time_warp: true
    time_warp_window: 5
    time_warp_mode: bicubic
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 30
    num_freq_mask: 2
    apply_time_mask: true
    time_mask_width_range:
    - 0
    - 40
    num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
    input_size: 1024
    output_size: 80
encoder: transformer
encoder_conf:
    output_size: 256
    attention_heads: 4
    linear_units: 2048
    num_blocks: 12
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.0
    input_layer: conv2d
    normalize_before: true
postencoder: null
postencoder_conf: {}
deliberationencoder: transformer
deliberationencoder_conf:
    output_size: 256
    attention_heads: 4
    linear_units: 2048
    num_blocks: 4
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.0
    input_layer: linear
    normalize_before: true
decoder: transformer
decoder_conf:
    attention_heads: 4
    linear_units: 2048
    num_blocks: 6
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.0
    src_attention_dropout_rate: 0.0
decoder2: rnn
decoder2_conf: {}
postdecoder: hugging_face_transformers
postdecoder_conf:
    model_name_or_path: bert-base-cased
    output_size: 256
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```

</details>

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
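Beyond the recipe-level `run.sh` above, the packed model can also be loaded from Python. A minimal sketch using the standard `espnet_model_zoo` interface, assuming the `espnet_model_zoo` and `soundfile` packages are installed; `utterance.wav` is a placeholder for a 16 kHz mono recording:

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# Download the packed model from the Hub and build an inference object from it.
d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/fsc_challenge_slu_2pass_transformer_gt"),
    beam_size=10,  # illustrative decoding setting, not from the training config
)

speech, rate = soundfile.read("utterance.wav")  # placeholder: 16 kHz mono audio
text, *_ = speech2text(speech)[0]
print(text)
```

For this SLU setup the decoded text should begin with an intent token such as `increase_volume_none` (see the `token_list` in the config above), followed by the word-level transcript.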
espnet/fsc_challenge_slu_2pass_conformer
espnet
2022-09-13T21:36:30Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:fsc_challenge", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-09-13T21:35:39Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - fsc_challenge license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/fsc_challenge_slu_2pass_conformer` This model was trained by Siddhant using fsc_challenge recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 3b54bfe52a294cdfce668c20d777bfa65f413745 pip install -e . cd egs2/fsc_challenge/slu1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/fsc_challenge_slu_2pass_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Jan 19 10:05:40 EST 2022` - python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a3` - pytorch version: `pytorch 1.9.0+cu102` - Git hash: `97b9dad4dbca71702cb7928a126ec45d96414a3f` - Commit date: `Mon Sep 13 22:55:04 2021 +0900` ## asr_train_asr_hubert_transformer_adam_specaug_deliberation_raw_en_word ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/spk_test|3349|17937|97.1|1.2|1.7|0.4|3.3|7.1| |inference_asr_model_valid.acc.ave_5best/utt_test|4204|22540|80.5|12.4|7.0|3.0|22.5|55.2| |inference_asr_model_valid.acc.ave_5best/valid|2597|13782|97.5|1.0|1.6|0.2|2.7|5.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/spk_test|3349|152191|98.4|0.5|1.1|0.2|1.8|7.1| |inference_asr_model_valid.acc.ave_5best/utt_test|4204|191435|89.3|5.4|5.3|2.7|13.4|55.2| |inference_asr_model_valid.acc.ave_5best/valid|2597|117282|98.6|0.4|1.0|0.2|1.6|5.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_hubert_transformer_adam_specaug_deliberation.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_hubert_transformer_adam_specaug_deliberation_raw_en_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 80 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: 5 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: - ../../fsc_challenge/asr1/exp/asr_train_asr_hubert_transformer_adam_specaug_raw_en_word/valid.acc.ave_5best.pth:encoder:encoder ignore_init_mismatch: false freeze_param: - encoder - postdecoder.model - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: 
null train_shape_file: - exp/asr_stats_raw_en_word/train/speech_shape - exp/asr_stats_raw_en_word/train/text_shape.word - exp/asr_stats_raw_en_word/train/transcript_shape.word valid_shape_file: - exp/asr_stats_raw_en_word/valid/speech_shape - exp/asr_stats_raw_en_word/valid/text_shape.word - exp/asr_stats_raw_en_word/valid/transcript_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text - - dump/raw/train/transcript - transcript - text valid_data_path_and_name_and_type: - - dump/raw/valid/wav.scp - speech - sound - - dump/raw/valid/text - text - text - - dump/raw/valid/transcript - transcript - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0002 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - the - turn - lights - in - up - 'on' - down - temperature - heat - switch - kitchen - volume - 'off' - increase_volume_none - bedroom - washroom - decrease_volume_none - language - bathroom - decrease - my - to - increase - decrease_heat_washroom - increase_heat_washroom - music - heating - bring - increase_heat_none - too - decrease_heat_none - me - change_language_none_none - activate_lights_washroom - set - activate_lights_kitchen - activate_music_none - lamp - deactivate_music_none - increase_heat_bedroom - i - increase_heat_kitchen - sound - get - decrease_heat_kitchen - loud - activate_lights_bedroom - deactivate_lights_bedroom - decrease_heat_bedroom - need - deactivate_lights_kitchen - bring_newspaper_none - newspaper - bring_shoes_none - shoes - bring_socks_none - socks - activate_lights_none - deactivate_lights_none - louder - go - deactivate_lights_washroom - change_language_Chinese_none - chinese - could - you - bring_juice_none - juice - deactivate_lamp_none - make - activate_lamp_none - it - stop - play - change - quiet - change_language_Korean_none - korean - some - practice - change_language_German_none - german - ok - now - main - change_language_English_none - english - its - hear - pause - this - thats - lower - far - audio - please - fetch - phones - a - different - start - resume - softer - couldnt - anything - quieter - put - video - is - low - max - phone - mute - reduce - use - languages - allow - device - system - <sos/eos> transcript_token_list: - <blank> - <unk> - the - turn - in - lights - up - down - 'on' - temperature - heat - kitchen - switch - 'off' - volume - bedroom - language - to - washroom - bathroom - my - increase - decrease - music - heating - bring - me - too - i - set - sound - get - and - lamp - loud - a - need - you - shoes - socks - newspaper - it - light - louder - chinese - could - go - make - juice - some - main - stop - change - quiet - practice - english - german - now - play - korean - okay - it's - room - that's - of - hear - is - different - wash - this - lower - fetch - audio - please - far - pause - start - use - two - phone - low - softer - put - bed - video - couldn't - languages - max - quieter - allow - system - mute - phone's - reduce - resume - anything - device - washer - ten - that - which - paper - phones - they - so - him - news - um - from - he - out - crease - your - ah - better - but - late - value - k - washing - turned - don't - 
then - o - whites - hating - any - said - buy - lies - by - watch - give - one - done - them - hit - place - jews - mites - send - for - thing - night - not - languish - after - was - decreased - lots - pass - dream - bad - life - do - search - case - increased - corian - we - swish - valium - lamb - or - right - row - germans - their - 'no' - soft - time - like - own - swiss - can - song - nights - lines - eating - bottom - us - are - churn - hate - load - top - em - take - what's - ok - chain - lates - here - with - quite - lice - at - lap - am - heap - lampon - temper - tough - hitting - zone - john - three - may - vine - bath - keep - into - burn - light's - map - meat - eat - sat - new - day - white - r - what - sheet - line - town - through - watchroom - sign - brain - tan - south - kitten - sit - such - should - back - rights - shows - knees - choose - meet - likes - sarks - say - frame - boss - land - bar - bites - plays - might - our - washerm - bit - she - kit - his - tip - she's - fire - free - find - bringing - mics - priest - they're - heats - run - shears - fish - stone - tone - there - picture - sent - t - bitum - sweet - sun - ring - love - i'll - pin - vi - try - have - bather - be - temp - zoom - all - earn - bethroom - sucks - fit - as - lives - break - sick - arm - thee - miss - basroom - viam - turns - found - dumb - ladder - trend - vime - viome - if - bathroomites - tom - during - bat - they'll - wish - way - four - about - cheese - mean - nice - ocean - page - turning - still - souths - thy - stand - ding - while - who - white's - step - switchy - made - mason - see - ashroom - vice - switched - la - ton - washering - favor - via - i'm - betum - create - you're - played - green - blank - bassroom - rim - nap - oh - lay - grease - ice - ball - kitchens - x - nine - her - bridge - pies - pepper - attempture - rain - wastrom - peace - b - push - bound - lawyer - these - let - fast - star - drone - sauce - practise - veteran - paul's - fall - temporary - chop - key - bow - daylights - loader - water - twitch - chains - lion - getting - pars - code - fly - makes - offs - lie - jim - tear - sacks - fine - print - five - tap - ye - mind - foot - kiss - fat - writes - will - switcher - de - baton - man - yeah - piece - vol - saw - come - m - when - talk - round - niece - musing - open - throw - bathing - viim - know - head - teacher - 'true' - term - lord - orshroom - trace - feature - sock - seas - sweets - layup - slang - stream - drum - d - plan - chun - sell - left - leg - settling - men - bathter - 'false' - stay - chapter - jane - son - rates - q - christie - awesome - tongue - chaff - there's - outer - clouder - past - red - chair - lab - shares - enough - fund - license - semi-phones - bones - sown - ups - needs - can't - kitchenites - drop - let's - french - tempter - source - beth - trying - volumes - slides - atherm - app - battle - drawing - 'yes' - trees - twenty - pleas - large - worship - oz - deep - bathrooms - loudest - trim - playing - saudi - volley - its - bite - wall - heater - bond - sand - catch - loss - tender - long - dam - heen - might's - fits - sue - eight - pray - sail - sounds - ain't - lang - spatroom - part - pitch - matron - jump - wrong - kills - why - band - name - ply - piper - thrown - roll - would - self - same - js - wash-rame - bed-room - lot's - swits - favour - meter - assume - math - heed - stitch - dawn - gorian - radio - thumb - pressure - store - dot - meal - seat - welsh - chin - swift - bing - waste - vassum - making - bend - 
boil - plain - decrees - square - tomb - finger - temperatures - speak - pep - bait - stepper - corey - allium - cloud - state - dan - stole - christy - fight - marshroom - plate - shown - big - bathrobe - torn - debate - chande - heaving - switz - batter - wash-room - trap - ram - jarman - friend's - tipture - lampos - flouter - brand - bass - brit - dome - decree - foam - link - sax - seeking - gain - values - beat - watching - dark - creates - midroom - months - camp - forium - compared - race - shines - tempre - pair - weight - read - decays - lump - worse - bus - truth - mice - door - sub - draw - cruci - tar - tend - batting - ms - means - buying - late's - legs - therm - mike - linguish - blaze - mine - peren - walden - arc - cruise - walling - lakes - passer - russian - rock - funds - curry - patch - per - wait - how - days - mamp - paz - just - bringches - must - nor - sing - u - blower - clay - heated - resound - fights - wamp - hearing - basserm - brewing - produce - sorry - wound - had - food - other - quick - an - face - flights - fun - downs - wine - vion - wampoff - lake - lane - last - soften - i'd - check - lowered - heam - ink - twist - think - die - crack - degree - washington - friend - doors - number - gave - button - tears - lash - many - raised - show - action - age - engage - says - join - under - till - save - follow - brings - shot - worst - trainees - night's - bear - rome - l - s - roam - da - pasta - spring - languished - tier - bone - pete - next - leads - dick - rate - dig - good - six - fought - mesic - beto - fitch - chimpashire - carrying - depritus - laws - youth - ethroom - baffer - rule - wing - langus - loudy - re - jeers - pull - lessons - lighter - took - jew - layout - fats - wit - nace - d's - temple - drawn - person - wastrong - codio - crank - chunny - inglace - increases - raleum - z - drones - slide - wet - saucer - embracer - c - bracher - bathrum - template - musics - radios - glyta - inglease - sad - shawn - deeps - nerve - sobs - blame - best - wandstrom - rostrum - sogs - sayings - itchen - churnies - ocana - fom's - witch - clamp - liked - manguish - lusher - saying - folium - foium - bae - vendor - edram - bendroom - bisium - acainos - walk - slight - pauses - alm - palima - temperating - loaded - fifty - banner - thank - hearts - croom - alima - sats - doing - list - masats - mesus - christa - feet - quote - tandy - bitroom - chand - fitjan - west - lampow - lords - press - chili - bedcom - fin - wasolates - pastor - breathe - stout - font's - perian - chich - chitty - bought - indication - fonts - gen - vash - ashram - india - avashroom - chrome - home - aria - newspad - steppad - barthrome - chant - software - chandy - shui - chandi - keitelates - belt - trujandelaits - indika - chen - shindy - shoot - district - stum - codeio - debesh - chandle - digit - prim - size - sodio - roshroom - vohima - suppose - shiny - versus - lowing - doubt - bake - semithones - waffroom - crucial - chippeture - ray - decreesadio - kenchet - decreesadi - meta - mang - baum - ate - washram - jacries - premium - sops - tempure - built - zoo - pring - lowers - opum - dipture - tetch - basper - peak - bassorm - pas - switchofta - lover - flap - incretes - bastroom - tikris - didn't - sweater - bastom - lets - lampoff - bedrim - jumites - bassum - vaim - badger - lengths - promise - batroom - dog - bedrooms - satio - involve - quiner - samayum - sulo - fused - butte - decrysovalium - subtuse - silo - tread - fetchment - lime - stuff - kin - switzerland - 
eaten - sex - sublinguisha - seling - singhush - fitchmus - ox - seven - mingle - shakorean - foams - keeps - setmingwich - quorian - swer - smingh - someites - washroomites - posse - seeding - haste - rosh - toff - vein - seating - chemi - sis - debt - lamish - jinnies - gennies - linguishu - tarn - gym - eats - dough - fedium - isaac - bender - does - healing - speed - adam - noun - aldio - etch - amp - sangus - southern - vodroom - sanch - post - beddo - things - sounded - tick - toverty - view - weshroom - comparator - fordom - keys - templar - worker - tiki - odio - geo - fordium - mode - mindless - fishermen - lua - pierto - voting - brad - europe - grade - shrieks - cures - stub - decase - ufoium - geophonia - moved - villium - drews - compare - vodal - dumpier - sorts - cases - bosh - modes - souls - rose - pedro - mash - shanja - druce - benise - draws - minces - laguish - throughdome - sweetzer - compato - science - dumper - breathing - yong - vanquish - chorean - bell - word - sentence - strain - washern - lava - basom - smith - less - sons - sights - sides - vying - allowed - cough - drew - knee - bastin - sot - stopping - blee - kisa - agenda - va - outset - belights - tingway - lada - sus - germ - bedroomed - length - phoned - rezone - lands - knitting - submit - broom - touch - hook - maths - chaffelites - kinch - june - limp - namp - basser - gitchen - dreamly - bash - upped - boff - mines - zik - chikurian - ab - ut - hamperture - pawn - okena - kain - coose - fon's - truck - battery - washm - likesov - cherp - scripture - small - welding - samphon - slaying - baldium - berumita - slamp - helium - habiture - saldom - vatum - blushham - apedial - sums - flipper - langworth - id - itch - isherm - washerms - odds - bedium - heights - kent - bethrym - atham - hos - induce - bathtub - lamped - bellium - iter - usherm - stu - rathroom - mees - resumed - barchy - betterim - trello - flowing - butcher - hooker - mashes - mu - site - riz - maty - handing - hand - predium - iton - kate - newspip - lifeton - printing - shiz - hidanu - bother - weights - jigs - glotter - templates - lamper - heata - boshroom - mounter - tipperger - washrooms - imager - washman - vayum - methroom - lossroom - waft - shrum - meshes - fringes - amma - keching - redees - crane - barthroom - yim - lara - speech - barther - tea - fra - chorium - trines - starks - firm's - wallian - rushing - lau - scott - fotherine - volleying - arthur - hesing - kna - path - meant - mutes - y - young - knows - kitchened - washherm - washher - kori - tipperture - ballium - rizo - though - topperture - tapature - douce - inblies - query - law - semi - turnouts - turneth - bassom - flay - damager - vassorm - cree - vassar - faster - julo - rub - septa - lost - sal - core - piney - desamla - bat's - third - poshroom - hunt - kickin - william - mackt - cheer - wardium - funny - wat - trough - ecogen - cannot - forget - lighted - engine - bind - edwin - mitoff - plensic - trump - setling - semi-divise - again - over - chat - endogen - mainling - increci - blease - fishing - washerb - downed - bedder - mython - volium - byton - mayam - basium - bathium - wilder - cape - viion - fet - dusk - betterm - file - spatorm - spedroom - starts - nightcloth - decris - borrow - vian - baimhap - track - church - batherine - washerroom - dirt - nam - trip - gin - brian - sup - swisher - parse - chainius - lat - sly - duna - shum - heal - rice - tiny - only - tied - tinder - board - later - gate - violin - tonight - pager - move - latin - 
warrior - major - magnum - each - bitter - none - tempreja - michaws - thirteen - little - cease - bytron - oita - neot - bank - armage - hang - wandhi - rantage - went - orsh - aloud - delete - stroll - side - browser - washa - ramp - hinting - grieve - marked - terran - embrace - breeze - lambas - emasek - got - blindness - stun - jama - bitumhida - he's - polite - ned - drumhim - langest - heem - midenings - papo - inglies - form - papu - larries - clean - lepicha - beja - very - coria - deal - mittron - damn - baja - timey - aima - going - ainting - don - adore - bullock - pain - audience - dotion - viving - notion - bytro - worry - talks - byte - dance - vosham - because - tree - ming - matox - modern - invasia - makeness - hawks - natom - arms - pay - hinder - latum - bench - hoofs - king - dampentering - north - boo - petations - donna - kabbatum - temprejo - chairman - mr - names - standard - batu - ridge - using - couldying - never - matter - ninety - powder - interviews - stir - changes - bap - jomo - courier - crop - void - safe - tamies - were - network - dambraja - once - glight - mightum - royal - gordian - viewers - cajanaton - stood - wind - treat - lean - teach - matam - tarsh - incase - dembation - clang - beta - lyphon - sang - dia - richer - bedoleitone - dampeja - pinosh - mishik - oft - hi - live - trumpet - butch - blights - seth - kids - herb - bettern - pathroom - shroom - ifridio - san - sroom - sanna - christ - clap - worshroom - cruce - crisadio - adi - iam - quarter - throne - heatsy - lace - shallize - pratt - beast - slash - paulster - brazil - priested - trust - slice - kind - jules - being - tying - gets - jewels - jills - arcs - jajarman - fesh - crate - grown - brained - porian - shills - grew - bare - keechin' - toss - reduced - var - libraries - slanguish - catching - rawroom - passa - laugh - dip - vima - plight - clider - tank - prat - securion - cause - care - bent - thyum - odd - washed - twitched - mangling - valiumt - pop - nicogen - switching - federm - washerrm - stolen - sub-language - heane - vap - babin - karan - mezek - towen - ek - full - saver - kill - plates - vien - eck - fi - myer - muzak - vampoline - bytes - fars - pratise - pat - decreases - ison - flight - blies - waffron - debates - double - taping - earth - greece - capture - corium - plake - blush - tech - than - fuff - alter - strengths - dial - betch - chanda - bartminate - fiz - outalates - bitumlets - sister - rich - washamlets - nigusa - digly - eighty - grace - approach - cachen - mats - embrature - impression - donate - nispepa - debrature - mistakes - misshik - printer - damperture - seam - degre - embreter - stretch - ready - ting - embedded - indawashroom - kitar - greeted - embrids - matterm - mythic - buff - fifth - cachan - lova - house - style - netson - wassum - fister - wasmite - version - fistum - debator - soon - gender - buttermate - fisterlets - bridger - waitum - crisset - embritter - content - itself - korian - mailing - dears - chimp - lambon - sarts - chippers - chimpature - lods - chine - koreen - ease - mat's - lightsigned - viin - vaya - kinship - sorrow - corin - near - chimpshire - annie - viium - loier - wise - tifeture - mizek - fiber - timbers - suck - juiced - thou - linguids - aaron - air - apt - art - erin - slowy - bedger - relights - seveji - edgerm - wide - broadroom - bet - betel - bats - bazo - venro - turnatives - bid - washerll - washerable - attempts - masaks - mix - andy - rep - rut - ani - sec - loan - gurin - advice - simus - encryst 
- lagwit - swisser - journeys - kid - bedma - temptary - father's - files - liners - fleets - paid - limo - washerel - signature - basil - baroom - lip - lampong - coming - mekita - kitchelettes - soul - lizone - we'll - eyes - drink - detention - debatero - every - timster - klater - rent - dell - plea - blow - riya - bashroom - stopped - junes - myton - sart - tart - cloth - reso - musiclotter - zet - loads - attempter - washham - carinian - decrysty - further - turos - suits - lampost - audiovian - meshirs - tim - emmy - dempature - look - cooking - dating - shis - ether - reason - volio - byzal - decrous - stating - bether - equal - edron - curse - semiphones - tell - dom - lad - glassor - enphusiatic - keton - flash - wasn't - sweeter - dry - sunlights - portion - rigging - level - auto - whips - lesson - killers - internalized - substituting - metro - tumblena - holy - folio - silence - worm - dvo - immute - doesn't - direction - tiquisidom - village - magic - whether - fatalouts - hot - thoughts - swipe - resem - wife - shoe - wiped - wipes - kits - battering - sweats - readase - jesus - switchs - g's - faium - wits - al - scream - taine - wids - sweat - mease - sees - lieout - salter - taints - founds - traced - sailed - bathtering - nail - ing - loose - hates - taint - languids - watering - bleed - vium - sam - liout - professor - paying - cogate - mushrooms - interpreter - library - words - unrate - insist - work - western - belenn - gangs - bodier - training - money - veil - patron - jonatha - mars - mainland - wasuporian - who's - starting - peretry - filtering - transformative - written - suit - defeat - nana - menasa - edbraphone - goes - der - reperator - segmaphone - nagostuperium - rating - competitive - betty - hasan - ventro - tantam - davos - suttape - loudness - random - volimex - posta - repellent - study - taste - sample - boiling - lysome - boy - states - lars - factor - point - nicer - sounder - boliam - poly - boom - bowling - body - chakawaliman - misploder - borium - thousand - stoke - islam - table - basic - temporator - quarters - middle - easter - antos - tools - painters - trans - world - sense - dieting - oaks - raising - been - mashaws - transfer - temples - <sos/eos> two_pass: false pre_postencoder_norm: false init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: transcript_token_list: - <blank> - <unk> - the - turn - in - lights - up - down - 'on' - temperature - heat - kitchen - switch - 'off' - volume - bedroom - language - to - washroom - bathroom - my - increase - decrease - music - heating - bring - me - too - i - set - sound - get - and - lamp - loud - a - need - you - shoes - socks - newspaper - it - light - louder - chinese - could - go - make - juice - some - main - stop - change - quiet - practice - english - german - now - play - korean - okay - it's - room - that's - of - hear - is - different - wash - this - lower - fetch - audio - please - far - pause - start - use - two - phone - low - softer - put - bed - video - couldn't - languages - max - quieter - allow - system - mute - phone's - reduce - resume - anything - device - washer - ten - that - which - paper - phones - they - so - him - news - um - from - he - out - crease - your - ah - better - but - late - value - k - washing - turned - don't - then - o - whites - hating - any - said - buy - lies - by - watch - give - one - done - them - hit - place - jews - mites - send - for - thing - night - not - languish - after - 
was - decreased - lots - pass - dream - bad - life - do - search - case - increased - corian - we - swish - valium - lamb - or - right - row - germans - their - 'no' - soft - time - like - own - swiss - can - song - nights - lines - eating - bottom - us - are - churn - hate - load - top - em - take - what's - ok - chain - lates - here - with - quite - lice - at - lap - am - heap - lampon - temper - tough - hitting - zone - john - three - may - vine - bath - keep - into - burn - light's - map - meat - eat - sat - new - day - white - r - what - sheet - line - town - through - watchroom - sign - brain - tan - south - kitten - sit - such - should - back - rights - shows - knees - choose - meet - likes - sarks - say - frame - boss - land - bar - bites - plays - might - our - washerm - bit - she - kit - his - tip - she's - fire - free - find - bringing - mics - priest - they're - heats - run - shears - fish - stone - tone - there - picture - sent - t - bitum - sweet - sun - ring - love - i'll - pin - vi - try - have - bather - be - temp - zoom - all - earn - bethroom - sucks - fit - as - lives - break - sick - arm - thee - miss - basroom - viam - turns - found - dumb - ladder - trend - vime - viome - if - bathroomites - tom - during - bat - they'll - wish - way - four - about - cheese - mean - nice - ocean - page - turning - still - souths - thy - stand - ding - while - who - white's - step - switchy - made - mason - see - ashroom - vice - switched - la - ton - washering - favor - via - i'm - betum - create - you're - played - green - blank - bassroom - rim - nap - oh - lay - grease - ice - ball - kitchens - x - nine - her - bridge - pies - pepper - attempture - rain - wastrom - peace - b - push - bound - lawyer - these - let - fast - star - drone - sauce - practise - veteran - paul's - fall - temporary - chop - key - bow - daylights - loader - water - twitch - chains - lion - getting - pars - code - fly - makes - offs - lie - jim - tear - sacks - fine - print - five - tap - ye - mind - foot - kiss - fat - writes - will - switcher - de - baton - man - yeah - piece - vol - saw - come - m - when - talk - round - niece - musing - open - throw - bathing - viim - know - head - teacher - 'true' - term - lord - orshroom - trace - feature - sock - seas - sweets - layup - slang - stream - drum - d - plan - chun - sell - left - leg - settling - men - bathter - 'false' - stay - chapter - jane - son - rates - q - christie - awesome - tongue - chaff - there's - outer - clouder - past - red - chair - lab - shares - enough - fund - license - semi-phones - bones - sown - ups - needs - can't - kitchenites - drop - let's - french - tempter - source - beth - trying - volumes - slides - atherm - app - battle - drawing - 'yes' - trees - twenty - pleas - large - worship - oz - deep - bathrooms - loudest - trim - playing - saudi - volley - its - bite - wall - heater - bond - sand - catch - loss - tender - long - dam - heen - might's - fits - sue - eight - pray - sail - sounds - ain't - lang - spatroom - part - pitch - matron - jump - wrong - kills - why - band - name - ply - piper - thrown - roll - would - self - same - js - wash-rame - bed-room - lot's - swits - favour - meter - assume - math - heed - stitch - dawn - gorian - radio - thumb - pressure - store - dot - meal - seat - welsh - chin - swift - bing - waste - vassum - making - bend - boil - plain - decrees - square - tomb - finger - temperatures - speak - pep - bait - stepper - corey - allium - cloud - state - dan - stole - christy - fight - marshroom - plate 
- shown - big - bathrobe - torn - debate - chande - heaving - switz - batter - wash-room - trap - ram - jarman - friend's - tipture - lampos - flouter - brand - bass - brit - dome - decree - foam - link - sax - seeking - gain - values - beat - watching - dark - creates - midroom - months - camp - forium - compared - race - shines - tempre - pair - weight - read - decays - lump - worse - bus - truth - mice - door - sub - draw - cruci - tar - tend - batting - ms - means - buying - late's - legs - therm - mike - linguish - blaze - mine - peren - walden - arc - cruise - walling - lakes - passer - russian - rock - funds - curry - patch - per - wait - how - days - mamp - paz - just - bringches - must - nor - sing - u - blower - clay - heated - resound - fights - wamp - hearing - basserm - brewing - produce - sorry - wound - had - food - other - quick - an - face - flights - fun - downs - wine - vion - wampoff - lake - lane - last - soften - i'd - check - lowered - heam - ink - twist - think - die - crack - degree - washington - friend - doors - number - gave - button - tears - lash - many - raised - show - action - age - engage - says - join - under - till - save - follow - brings - shot - worst - trainees - night's - bear - rome - l - s - roam - da - pasta - spring - languished - tier - bone - pete - next - leads - dick - rate - dig - good - six - fought - mesic - beto - fitch - chimpashire - carrying - depritus - laws - youth - ethroom - baffer - rule - wing - langus - loudy - re - jeers - pull - lessons - lighter - took - jew - layout - fats - wit - nace - d's - temple - drawn - person - wastrong - codio - crank - chunny - inglace - increases - raleum - z - drones - slide - wet - saucer - embracer - c - bracher - bathrum - template - musics - radios - glyta - inglease - sad - shawn - deeps - nerve - sobs - blame - best - wandstrom - rostrum - sogs - sayings - itchen - churnies - ocana - fom's - witch - clamp - liked - manguish - lusher - saying - folium - foium - bae - vendor - edram - bendroom - bisium - acainos - walk - slight - pauses - alm - palima - temperating - loaded - fifty - banner - thank - hearts - croom - alima - sats - doing - list - masats - mesus - christa - feet - quote - tandy - bitroom - chand - fitjan - west - lampow - lords - press - chili - bedcom - fin - wasolates - pastor - breathe - stout - font's - perian - chich - chitty - bought - indication - fonts - gen - vash - ashram - india - avashroom - chrome - home - aria - newspad - steppad - barthrome - chant - software - chandy - shui - chandi - keitelates - belt - trujandelaits - indika - chen - shindy - shoot - district - stum - codeio - debesh - chandle - digit - prim - size - sodio - roshroom - vohima - suppose - shiny - versus - lowing - doubt - bake - semithones - waffroom - crucial - chippeture - ray - decreesadio - kenchet - decreesadi - meta - mang - baum - ate - washram - jacries - premium - sops - tempure - built - zoo - pring - lowers - opum - dipture - tetch - basper - peak - bassorm - pas - switchofta - lover - flap - incretes - bastroom - tikris - didn't - sweater - bastom - lets - lampoff - bedrim - jumites - bassum - vaim - badger - lengths - promise - batroom - dog - bedrooms - satio - involve - quiner - samayum - sulo - fused - butte - decrysovalium - subtuse - silo - tread - fetchment - lime - stuff - kin - switzerland - eaten - sex - sublinguisha - seling - singhush - fitchmus - ox - seven - mingle - shakorean - foams - keeps - setmingwich - quorian - swer - smingh - someites - washroomites - posse - 
seeding - haste - rosh - toff - vein - seating - chemi - sis - debt - lamish - jinnies - gennies - linguishu - tarn - gym - eats - dough - fedium - isaac - bender - does - healing - speed - adam - noun - aldio - etch - amp - sangus - southern - vodroom - sanch - post - beddo - things - sounded - tick - toverty - view - weshroom - comparator - fordom - keys - templar - worker - tiki - odio - geo - fordium - mode - mindless - fishermen - lua - pierto - voting - brad - europe - grade - shrieks - cures - stub - decase - ufoium - geophonia - moved - villium - drews - compare - vodal - dumpier - sorts - cases - bosh - modes - souls - rose - pedro - mash - shanja - druce - benise - draws - minces - laguish - throughdome - sweetzer - compato - science - dumper - breathing - yong - vanquish - chorean - bell - word - sentence - strain - washern - lava - basom - smith - less - sons - sights - sides - vying - allowed - cough - drew - knee - bastin - sot - stopping - blee - kisa - agenda - va - outset - belights - tingway - lada - sus - germ - bedroomed - length - phoned - rezone - lands - knitting - submit - broom - touch - hook - maths - chaffelites - kinch - june - limp - namp - basser - gitchen - dreamly - bash - upped - boff - mines - zik - chikurian - ab - ut - hamperture - pawn - okena - kain - coose - fon's - truck - battery - washm - likesov - cherp - scripture - small - welding - samphon - slaying - baldium - berumita - slamp - helium - habiture - saldom - vatum - blushham - apedial - sums - flipper - langworth - id - itch - isherm - washerms - odds - bedium - heights - kent - bethrym - atham - hos - induce - bathtub - lamped - bellium - iter - usherm - stu - rathroom - mees - resumed - barchy - betterim - trello - flowing - butcher - hooker - mashes - mu - site - riz - maty - handing - hand - predium - iton - kate - newspip - lifeton - printing - shiz - hidanu - bother - weights - jigs - glotter - templates - lamper - heata - boshroom - mounter - tipperger - washrooms - imager - washman - vayum - methroom - lossroom - waft - shrum - meshes - fringes - amma - keching - redees - crane - barthroom - yim - lara - speech - barther - tea - fra - chorium - trines - starks - firm's - wallian - rushing - lau - scott - fotherine - volleying - arthur - hesing - kna - path - meant - mutes - y - young - knows - kitchened - washherm - washher - kori - tipperture - ballium - rizo - though - topperture - tapature - douce - inblies - query - law - semi - turnouts - turneth - bassom - flay - damager - vassorm - cree - vassar - faster - julo - rub - septa - lost - sal - core - piney - desamla - bat's - third - poshroom - hunt - kickin - william - mackt - cheer - wardium - funny - wat - trough - ecogen - cannot - forget - lighted - engine - bind - edwin - mitoff - plensic - trump - setling - semi-divise - again - over - chat - endogen - mainling - increci - blease - fishing - washerb - downed - bedder - mython - volium - byton - mayam - basium - bathium - wilder - cape - viion - fet - dusk - betterm - file - spatorm - spedroom - starts - nightcloth - decris - borrow - vian - baimhap - track - church - batherine - washerroom - dirt - nam - trip - gin - brian - sup - swisher - parse - chainius - lat - sly - duna - shum - heal - rice - tiny - only - tied - tinder - board - later - gate - violin - tonight - pager - move - latin - warrior - major - magnum - each - bitter - none - tempreja - michaws - thirteen - little - cease - bytron - oita - neot - bank - armage - hang - wandhi - rantage - went - orsh - aloud - 
delete - stroll - side - browser - washa - ramp - hinting - grieve - marked - terran - embrace - breeze - lambas - emasek - got - blindness - stun - jama - bitumhida - he's - polite - ned - drumhim - langest - heem - midenings - papo - inglies - form - papu - larries - clean - lepicha - beja - very - coria - deal - mittron - damn - baja - timey - aima - going - ainting - don - adore - bullock - pain - audience - dotion - viving - notion - bytro - worry - talks - byte - dance - vosham - because - tree - ming - matox - modern - invasia - makeness - hawks - natom - arms - pay - hinder - latum - bench - hoofs - king - dampentering - north - boo - petations - donna - kabbatum - temprejo - chairman - mr - names - standard - batu - ridge - using - couldying - never - matter - ninety - powder - interviews - stir - changes - bap - jomo - courier - crop - void - safe - tamies - were - network - dambraja - once - glight - mightum - royal - gordian - viewers - cajanaton - stood - wind - treat - lean - teach - matam - tarsh - incase - dembation - clang - beta - lyphon - sang - dia - richer - bedoleitone - dampeja - pinosh - mishik - oft - hi - live - trumpet - butch - blights - seth - kids - herb - bettern - pathroom - shroom - ifridio - san - sroom - sanna - christ - clap - worshroom - cruce - crisadio - adi - iam - quarter - throne - heatsy - lace - shallize - pratt - beast - slash - paulster - brazil - priested - trust - slice - kind - jules - being - tying - gets - jewels - jills - arcs - jajarman - fesh - crate - grown - brained - porian - shills - grew - bare - keechin' - toss - reduced - var - libraries - slanguish - catching - rawroom - passa - laugh - dip - vima - plight - clider - tank - prat - securion - cause - care - bent - thyum - odd - washed - twitched - mangling - valiumt - pop - nicogen - switching - federm - washerrm - stolen - sub-language - heane - vap - babin - karan - mezek - towen - ek - full - saver - kill - plates - vien - eck - fi - myer - muzak - vampoline - bytes - fars - pratise - pat - decreases - ison - flight - blies - waffron - debates - double - taping - earth - greece - capture - corium - plake - blush - tech - than - fuff - alter - strengths - dial - betch - chanda - bartminate - fiz - outalates - bitumlets - sister - rich - washamlets - nigusa - digly - eighty - grace - approach - cachen - mats - embrature - impression - donate - nispepa - debrature - mistakes - misshik - printer - damperture - seam - degre - embreter - stretch - ready - ting - embedded - indawashroom - kitar - greeted - embrids - matterm - mythic - buff - fifth - cachan - lova - house - style - netson - wassum - fister - wasmite - version - fistum - debator - soon - gender - buttermate - fisterlets - bridger - waitum - crisset - embritter - content - itself - korian - mailing - dears - chimp - lambon - sarts - chippers - chimpature - lods - chine - koreen - ease - mat's - lightsigned - viin - vaya - kinship - sorrow - corin - near - chimpshire - annie - viium - loier - wise - tifeture - mizek - fiber - timbers - suck - juiced - thou - linguids - aaron - air - apt - art - erin - slowy - bedger - relights - seveji - edgerm - wide - broadroom - bet - betel - bats - bazo - venro - turnatives - bid - washerll - washerable - attempts - masaks - mix - andy - rep - rut - ani - sec - loan - gurin - advice - simus - encryst - lagwit - swisser - journeys - kid - bedma - temptary - father's - files - liners - fleets - paid - limo - washerel - signature - basil - baroom - lip - lampong - coming - mekita - 
kitchelettes - soul - lizone - we'll - eyes - drink - detention - debatero - every - timster - klater - rent - dell - plea - blow - riya - bashroom - stopped - junes - myton - sart - tart - cloth - reso - musiclotter - zet - loads - attempter - washham - carinian - decrysty - further - turos - suits - lampost - audiovian - meshirs - tim - emmy - dempature - look - cooking - dating - shis - ether - reason - volio - byzal - decrous - stating - bether - equal - edron - curse - semiphones - tell - dom - lad - glassor - enphusiatic - keton - flash - wasn't - sweeter - dry - sunlights - portion - rigging - level - auto - whips - lesson - killers - internalized - substituting - metro - tumblena - holy - folio - silence - worm - dvo - immute - doesn't - direction - tiquisidom - village - magic - whether - fatalouts - hot - thoughts - swipe - resem - wife - shoe - wiped - wipes - kits - battering - sweats - readase - jesus - switchs - g's - faium - wits - al - scream - taine - wids - sweat - mease - sees - lieout - salter - taints - founds - traced - sailed - bathtering - nail - ing - loose - hates - taint - languids - watering - bleed - vium - sam - liout - professor - paying - cogate - mushrooms - interpreter - library - words - unrate - insist - work - western - belenn - gangs - bodier - training - money - veil - patron - jonatha - mars - mainland - wasuporian - who's - starting - peretry - filtering - transformative - written - suit - defeat - nana - menasa - edbraphone - goes - der - reperator - segmaphone - nagostuperium - rating - competitive - betty - hasan - ventro - tantam - davos - suttape - loudness - random - volimex - posta - repellent - study - taste - sample - boiling - lysome - boy - states - lars - factor - point - nicer - sounder - boliam - poly - boom - bowling - body - chakawaliman - misploder - borium - thousand - stoke - islam - table - basic - temporator - quarters - middle - easter - antos - tools - painters - trans - world - sense - dieting - oaks - raising - been - mashaws - transfer - temples - <sos/eos> ctc_weight: 0.5 ignore_id: -1 lsm_weight: 0.0 length_normalized_loss: false report_cer: true report_wer: true sym_space: <space> sym_blank: <blank> extract_feats_in_collect_stats: true two_pass: false pre_postencoder_norm: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: transformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true postencoder: null postencoder_conf: {} deliberationencoder: conformer deliberationencoder_conf: output_size: 256 attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: linear normalize_before: true macaron_style: true pos_enc_layer_type: 
rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 decoder2: rnn decoder2_conf: {} postdecoder: hugging_face_transformers postdecoder_conf: model_name_or_path: bert-base-cased output_size: 256 required: - output_dir - token_list version: 0.10.3a3 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
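For reference, decoding with a checkpoint trained under the configuration above could look roughly like the sketch below. This is only an illustration: `espnet/<model-tag>` is a hypothetical placeholder for the actual repository id, and the audio file name is arbitrary. It uses ESPnet2's `Speech2Text` interface, which expects 16 kHz mono audio to match the `fs: 16k` frontend setting in the config.

```python
# A minimal decoding sketch, assuming an ESPnet2 ASR checkpoint trained with
# the configuration above; "espnet/<model-tag>" is a hypothetical placeholder.
# Downloading by tag requires the espnet_model_zoo package.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/<model-tag>",  # substitute the real repository id
    ctc_weight=0.5,        # matches ctc_weight in model_conf above
)

# The frontend is configured for 16 kHz mono input (fs: 16k).
speech, rate = sf.read("utterance.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```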
sd-concepts-library/looney-anime
sd-concepts-library
2022-09-13T21:09:35Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-13T21:09:29Z
--- license: mit --- ### looney anime on Stable Diffusion This is the `<looney-anime>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<looney-anime> 0](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/8.jpeg) ![<looney-anime> 1](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/18.jpeg) ![<looney-anime> 2](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/14.jpeg) ![<looney-anime> 3](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/10.jpeg) ![<looney-anime> 4](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/1.jpeg) ![<looney-anime> 5](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/16.jpeg) ![<looney-anime> 6](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/15.jpeg) ![<looney-anime> 7](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/12.jpeg) ![<looney-anime> 8](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/11.jpeg) ![<looney-anime> 9](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/9.jpeg) ![<looney-anime> 10](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/5.jpeg) ![<looney-anime> 11](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/0.jpeg) ![<looney-anime> 12](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/17.jpeg) ![<looney-anime> 13](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/4.jpeg) ![<looney-anime> 14](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/13.jpeg) ![<looney-anime> 15](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/2.jpeg) ![<looney-anime> 16](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/19.jpeg) ![<looney-anime> 17](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/3.jpeg) ![<looney-anime> 18](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/6.jpeg) ![<looney-anime> 19](https://huggingface.co/sd-concepts-library/looney-anime/resolve/main/concept_images/7.jpeg)
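If you would rather use the concept from a plain Python script than the notebooks above, recent versions of `diffusers` can load the learned embedding directly. The sketch below is only a starting point: it assumes a Stable Diffusion v1 base checkpoint, a CUDA device, and a recent `diffusers` release with `load_textual_inversion`; the prompt is an arbitrary example.

```python
# A minimal usage sketch, assuming a recent diffusers release with
# load_textual_inversion and a Stable Diffusion v1 base checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <looney-anime> embedding straight from this repository.
pipe.load_textual_inversion("sd-concepts-library/looney-anime")

image = pipe("a portrait in the style of <looney-anime>").images[0]
image.save("looney-anime-portrait.png")
```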
sd-concepts-library/thunderdome-covers
sd-concepts-library
2022-09-13T20:26:37Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-13T20:26:24Z
--- license: mit --- ### Thunderdome Covers on Stable Diffusion This is the `<thunderdome>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<thunderdome> 0](https://huggingface.co/sd-concepts-library/thunderdome-covers/resolve/main/concept_images/1.jpeg) ![<thunderdome> 1](https://huggingface.co/sd-concepts-library/thunderdome-covers/resolve/main/concept_images/0.jpeg) ![<thunderdome> 2](https://huggingface.co/sd-concepts-library/thunderdome-covers/resolve/main/concept_images/4.jpeg) ![<thunderdome> 3](https://huggingface.co/sd-concepts-library/thunderdome-covers/resolve/main/concept_images/2.jpeg) ![<thunderdome> 4](https://huggingface.co/sd-concepts-library/thunderdome-covers/resolve/main/concept_images/3.jpeg)
sd-concepts-library/eddie
sd-concepts-library
2022-09-13T20:14:14Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-13T20:14:10Z
--- license: mit --- ### Eddie on Stable Diffusion This is the `Eddie` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![Eddie 0](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/1.jpeg) ![Eddie 1](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/0.jpeg) ![Eddie 2](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/4.jpeg) ![Eddie 3](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/2.jpeg) ![Eddie 4](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/3.jpeg)
Padomin/t5-base-TEDxJP-0front-1body-1rear
Padomin
2022-09-13T19:50:21Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-13T12:31:15Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-0front-1body-1rear results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-0front-1body-1rear This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.4869 - Wer: 0.1801 - Mer: 0.1739 - Wil: 0.2635 - Wip: 0.7365 - Hits: 55253 - Substitutions: 6626 - Deletions: 2708 - Insertions: 2296 - Cer: 0.1411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.6609 | 1.0 | 1457 | 0.5121 | 0.2181 | 0.2049 | 0.2958 | 0.7042 | 54651 | 6867 | 3069 | 4151 | 0.1880 | | 0.5633 | 2.0 | 2914 | 0.4719 | 0.1891 | 0.1817 | 0.2714 | 0.7286 | 55015 | 6654 | 2918 | 2644 | 0.1558 | | 0.5212 | 3.0 | 4371 | 0.4626 | 0.1838 | 0.1771 | 0.2666 | 0.7334 | 55168 | 6635 | 2784 | 2452 | 0.1462 | | 0.4498 | 4.0 | 5828 | 0.4616 | 0.1807 | 0.1747 | 0.2643 | 0.7357 | 55148 | 6630 | 2809 | 2231 | 0.1420 | | 0.4058 | 5.0 | 7285 | 0.4633 | 0.1799 | 0.1739 | 0.2631 | 0.7369 | 55200 | 6592 | 2795 | 2231 | 0.1419 | | 0.3802 | 6.0 | 8742 | 0.4675 | 0.1796 | 0.1733 | 0.2630 | 0.7370 | 55311 | 6636 | 2640 | 2321 | 0.1412 | | 0.4126 | 7.0 | 10199 | 0.4737 | 0.1781 | 0.1724 | 0.2617 | 0.7383 | 55245 | 6595 | 2747 | 2163 | 0.1394 | | 0.3436 | 8.0 | 11656 | 0.4772 | 0.1788 | 0.1729 | 0.2624 | 0.7376 | 55247 | 6616 | 2724 | 2208 | 0.1401 | | 0.3249 | 9.0 | 13113 | 0.4827 | 0.1796 | 0.1735 | 0.2632 | 0.7368 | 55265 | 6635 | 2687 | 2281 | 0.1407 | | 0.3347 | 10.0 | 14570 | 0.4869 | 0.1801 | 0.1739 | 0.2635 | 0.7365 | 55253 | 6626 | 2708 | 2296 | 0.1411 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
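The autogenerated card includes no usage example, so as a rough starting point, inference could look like the sketch below. It assumes the tokenizer was saved alongside the checkpoint (inherited from the `sonoisa/t5-base-japanese` base model), and the Japanese input sentence is an arbitrary example, not drawn from the te_dx_jp evaluation data.

```python
# A minimal inference sketch; the input sentence is an arbitrary Japanese
# example, not taken from the te_dx_jp dataset.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Padomin/t5-base-TEDxJP-0front-1body-1rear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("えーっと、今日はですね、音声認識の話をします", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```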
colorfulpen/1
colorfulpen
2022-09-13T19:38:19Z
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2022-09-13T19:38:19Z
--- license: cc-by-nc-sa-4.0 ---
erikejw/swin-base-patch4-window7-224-in22k-finetuned-eurosat
erikejw
2022-09-13T19:22:40Z
219
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-09-13T18:27:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-base-patch4-window7-224-in22k-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9896296296296296 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224-in22k-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0390 - Accuracy: 0.9896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1691 | 1.0 | 190 | 0.0693 | 0.9789 | | 0.1275 | 2.0 | 380 | 0.0419 | 0.9889 | | 0.1165 | 3.0 | 570 | 0.0390 | 0.9896 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
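The card omits a usage example; a minimal inference sketch might look like the one below. `AutoFeatureExtractor` matches the Transformers 4.21.x release listed under framework versions, and the image path is a placeholder.

```python
# A minimal inference sketch; AutoFeatureExtractor matches the Transformers
# 4.21.x release noted above, and the image path is a placeholder.
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_id = "erikejw/swin-base-patch4-window7-224-in22k-finetuned-eurosat"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("satellite_tile.png").convert("RGB")
inputs = extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```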
evegarcianz/bert-finetuned-squad
evegarcianz
2022-09-13T19:22:38Z
62
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-09-13T07:56:09Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: evegarcianz/bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # evegarcianz/bert-finetuned-squad This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4637 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 33276, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.6630 | 0 | | 0.4637 | 1 | ### Framework versions - Transformers 4.21.0 - TensorFlow 2.8.1 - Datasets 2.4.0 - Tokenizers 0.12.1
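Since the card lists no usage example, extractive question answering with this checkpoint could look like the sketch below. `framework="tf"` is set because the repository contains TensorFlow weights (the model was trained with Keras), and the question/context pair is an arbitrary example.

```python
# A minimal sketch with the question-answering pipeline; framework="tf"
# because this checkpoint was trained and saved with Keras/TensorFlow.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="evegarcianz/bert-finetuned-squad",
    framework="tf",
)
result = qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint is a fine-tuned version of "
            "distilbert-base-cased-distilled-squad.",
)
print(result["answer"], result["score"])
```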
sd-concepts-library/poutine-dish
sd-concepts-library
2022-09-13T18:24:28Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-13T18:24:16Z
--- license: mit --- ### Poutine Dish on Stable Diffusion This is the `<poutine-qc>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<poutine-qc> 0](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/11.jpeg) ![<poutine-qc> 1](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/16.jpeg) ![<poutine-qc> 2](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/8.jpeg) ![<poutine-qc> 3](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/19.jpeg) ![<poutine-qc> 4](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/2.jpeg) ![<poutine-qc> 5](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/13.jpeg) ![<poutine-qc> 6](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/12.jpeg) ![<poutine-qc> 7](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/3.jpeg) ![<poutine-qc> 8](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/6.jpeg) ![<poutine-qc> 9](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/1.jpeg) ![<poutine-qc> 10](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/15.jpeg) ![<poutine-qc> 11](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/17.jpeg) ![<poutine-qc> 12](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/10.jpeg) ![<poutine-qc> 13](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/9.jpeg) ![<poutine-qc> 14](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/5.jpeg) ![<poutine-qc> 15](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/18.jpeg) ![<poutine-qc> 16](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/4.jpeg) ![<poutine-qc> 17](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/7.jpeg) ![<poutine-qc> 18](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/14.jpeg) ![<poutine-qc> 19](https://huggingface.co/sd-concepts-library/poutine-dish/resolve/main/concept_images/0.jpeg)