| Column | Type | Range / Values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-07 06:34:03 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 544 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-07 06:33:46 |
| card | string | length 11 to 1.01M |
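Each row below repeats these ten fields for one model, with the full model card flattened into the `card` column. As a rough illustration of how this schema can be queried, here is a minimal sketch using pandas; the local file name `models.parquet` is an assumption, not something given by this dump.

```python
import pandas as pd

# Hypothetical local export of the rows below; the file name is an assumption.
df = pd.read_parquet("models.parquet")

# Keep only token-classification models by the `tner` author,
# sorted by download count in descending order.
tner_ner = (
    df[(df["pipeline_tag"] == "token-classification") & (df["author"] == "tner")]
    .sort_values("downloads", ascending=False)
)
print(tner_ner[["modelId", "downloads", "likes", "last_modified"]].head())
```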
tner/bertweet-base-tweetner7-2021
tner
2022-09-27T15:23:18Z
110
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/tweetner7", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-09T21:16:21Z
--- datasets: - tner/tweetner7 metrics: - f1 - precision - recall model-index: - name: tner/bertweet-base-tweetner7-2021 results: - task: name: Token Classification type: token-classification dataset: name: tner/tweetner7 type: tner/tweetner7 args: tner/tweetner7 metrics: - name: F1 (test_2021) type: f1 value: 0.6308962917798349 - name: Precision (test_2021) type: precision value: 0.6058767167039285 - name: Recall (test_2021) type: recall value: 0.6580712303422757 - name: Macro F1 (test_2021) type: f1_macro value: 0.5735468406550763 - name: Macro Precision (test_2021) type: precision_macro value: 0.5503198173085064 - name: Macro Recall (test_2021) type: recall_macro value: 0.6012922054817469 - name: Entity Span F1 (test_2021) type: f1_entity_span value: 0.7788214245778822 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7538694663924668 - name: Entity Span Recall (test_2021) type: recall_entity_span value: 0.8054816699433329 - name: F1 (test_2020) type: f1 value: 0.6205787781350482 - name: Precision (test_2020) type: precision value: 0.6415512465373961 - name: Recall (test_2020) type: recall value: 0.6009340944473275 - name: Macro F1 (test_2020) type: f1_macro value: 0.5723158793505982 - name: Macro Precision (test_2020) type: precision_macro value: 0.5910271170769507 - name: Macro Recall (test_2020) type: recall_macro value: 0.5568451570610017 - name: Entity Span F1 (test_2020) type: f1_entity_span value: 0.7595141700404859 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7913385826771654 - name: Entity Span Recall (test_2020) type: recall_entity_span value: 0.7301504929942917 pipeline_tag: token-classification widget: - text: "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}" example_title: "NER Example 1" --- # tner/bertweet-base-tweetner7-2021 This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2021` split). Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set of 2021: - F1 (micro): 0.6308962917798349 - Precision (micro): 0.6058767167039285 - Recall (micro): 0.6580712303422757 - F1 (macro): 0.5735468406550763 - Precision (macro): 0.5503198173085064 - Recall (macro): 0.6012922054817469 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.4565701559020044 - creative_work: 0.4098984771573604 - event: 0.4628410159924742 - group: 0.593177511054959 - location: 0.6333949476278496 - person: 0.8279457768508863 - product: 0.631 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.6218627510838193, 0.6407164862470697] - 95%: [0.6201627010426306, 0.6422908401462293] - F1 (macro): - 90%: [0.6218627510838193, 0.6407164862470697] - 95%: [0.6201627010426306, 0.6422908401462293] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bertweet-base-tweetner7-2021/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/bertweet-base-tweetner7-2021/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip. 
```shell pip install tner ``` Tweets in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7) are pre-processed so that account names and URLs appear in special formats (see the dataset page for more detail), so we format raw tweets accordingly and then run the model prediction as below. ```python import re from urlextract import URLExtract from tner import TransformersNER extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek" text_format = format_tweet(text) model = TransformersNER("tner/bertweet-base-tweetner7-2021") model.predict([text_format]) ``` The model can also be used via the transformers library, but this is not recommended because the CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweetner7'] - dataset_split: train_2021 - dataset_name: None - local_dataset: None - model: vinai/bertweet-base - crf: False - max_length: 128 - epoch: 30 - batch_size: 32 - lr: 0.0001 - random_seed: 0 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.3 - max_grad_norm: 1 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bertweet-base-tweetner7-2021/raw/main/trainer_config.json). ### Reference If you use the model, please cite the T-NER paper and the TweetNER7 paper. - T-NER ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. 
To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ``` - TweetNER7 ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
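The card above notes that the checkpoint can also be loaded directly with the transformers library, without the CRF decoding that the tner wrapper provides. Below is a minimal sketch of that plain-transformers path, intended only for quick experiments; the aggregation strategy is a choice made for this sketch, not something specified in the card.

```python
from transformers import pipeline

# Plain transformers inference, without tner's CRF decoding.
ner = pipeline(
    "token-classification",
    model="tner/bertweet-base-tweetner7-2021",
    aggregation_strategy="simple",  # merge sub-word predictions into entity spans
)

# Input should already be in TweetNER7 format ({@account@} and {{URL}} markers).
text = (
    "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from "
    "{@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}"
)
print(ner(text))
```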
tner/bert-large-tweetner7-random
tner
2022-09-27T15:22:53Z
120
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "dataset:tner/tweetner7", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-11T11:22:19Z
--- datasets: - tner/tweetner7 metrics: - f1 - precision - recall model-index: - name: tner/bert-large-tweetner7-random results: - task: name: Token Classification type: token-classification dataset: name: tner/tweetner7 type: tner/tweetner7 args: tner/tweetner7 metrics: - name: F1 (test_2021) type: f1 value: 0.6238958623895862 - name: Precision (test_2021) type: precision value: 0.6271028037383177 - name: Recall (test_2021) type: recall value: 0.6207215541165587 - name: Macro F1 (test_2021) type: f1_macro value: 0.5754103658637805 - name: Macro Precision (test_2021) type: precision_macro value: 0.5760445653768616 - name: Macro Recall (test_2021) type: recall_macro value: 0.5751041088351385 - name: Entity Span F1 (test_2021) type: f1_entity_span value: 0.7648665930360984 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7689340813464236 - name: Entity Span Recall (test_2021) type: recall_entity_span value: 0.760841910489187 - name: F1 (test_2020) type: f1 value: 0.6154274575327208 - name: Precision (test_2020) type: precision value: 0.6640625 - name: Recall (test_2020) type: recall value: 0.5734302023871303 - name: Macro F1 (test_2020) type: f1_macro value: 0.5709159092071027 - name: Macro Precision (test_2020) type: precision_macro value: 0.6168953196783556 - name: Macro Recall (test_2020) type: recall_macro value: 0.5321784485961766 - name: Entity Span F1 (test_2020) type: f1_entity_span value: 0.732943469785575 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7908653846153846 - name: Entity Span Recall (test_2020) type: recall_entity_span value: 0.6829268292682927 pipeline_tag: token-classification widget: - text: "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}" example_title: "NER Example 1" --- # tner/bert-large-tweetner7-random This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_random` split). Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set of 2021: - F1 (micro): 0.6238958623895862 - Precision (micro): 0.6271028037383177 - Recall (micro): 0.6207215541165587 - F1 (macro): 0.5754103658637805 - Precision (macro): 0.5760445653768616 - Recall (macro): 0.5751041088351385 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.49146005509641877 - creative_work: 0.3972413793103448 - event: 0.44788732394366193 - group: 0.5767073573078192 - location: 0.6721649484536083 - person: 0.8116810183451891 - product: 0.6307304785894207 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.6141711573096726, 0.6329835528622229] - 95%: [0.6127880547187768, 0.6345997209553179] - F1 (macro): - 90%: [0.6141711573096726, 0.6329835528622229] - 95%: [0.6127880547187768, 0.6345997209553179] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-large-tweetner7-random/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/bert-large-tweetner7-random/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip. 
```shell pip install tner ``` Tweets in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7) are pre-processed so that account names and URLs appear in special formats (see the dataset page for more detail), so we format raw tweets accordingly and then run the model prediction as below. ```python import re from urlextract import URLExtract from tner import TransformersNER extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek" text_format = format_tweet(text) model = TransformersNER("tner/bert-large-tweetner7-random") model.predict([text_format]) ``` The model can also be used via the transformers library, but this is not recommended because the CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweetner7'] - dataset_split: train_random - dataset_name: None - local_dataset: None - model: bert-large-cased - crf: True - max_length: 128 - epoch: 30 - batch_size: 32 - lr: 0.0001 - random_seed: 0 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.15 - max_grad_norm: 1 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-large-tweetner7-random/raw/main/trainer_config.json). ### Reference If you use the model, please cite the T-NER paper and the TweetNER7 paper. - T-NER ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. 
To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ``` - TweetNER7 ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
tner/bert-large-tweetner7-2021
tner
2022-09-27T15:21:36Z
122
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "dataset:tner/tweetner7", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-12T09:24:07Z
--- datasets: - tner/tweetner7 metrics: - f1 - precision - recall model-index: - name: tner/bert-large-tweetner7-2021 results: - task: name: Token Classification type: token-classification dataset: name: tner/tweetner7 type: tner/tweetner7 args: tner/tweetner7 metrics: - name: F1 (test_2021) type: f1 value: 0.5974718775368201 - name: Precision (test_2021) type: precision value: 0.5992091183996279 - name: Recall (test_2021) type: recall value: 0.5957446808510638 - name: Macro F1 (test_2021) type: f1_macro value: 0.5392877076670867 - name: Macro Precision (test_2021) type: precision_macro value: 0.5398425980592713 - name: Macro Recall (test_2021) type: recall_macro value: 0.5439768272225339 - name: Entity Span F1 (test_2021) type: f1_entity_span value: 0.7497514474530674 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7584003786086133 - name: Entity Span Recall (test_2021) type: recall_entity_span value: 0.7412975598473459 - name: F1 (test_2020) type: f1 value: 0.5662616558349817 - name: Precision (test_2020) type: precision value: 0.6215880893300249 - name: Recall (test_2020) type: recall value: 0.519979242345615 - name: Macro F1 (test_2020) type: f1_macro value: 0.5096985017746614 - name: Macro Precision (test_2020) type: precision_macro value: 0.5628721370469417 - name: Macro Recall (test_2020) type: recall_macro value: 0.47520198274721537 - name: Entity Span F1 (test_2020) type: f1_entity_span value: 0.7065868263473053 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7841772151898734 - name: Entity Span Recall (test_2020) type: recall_entity_span value: 0.6429683445770628 pipeline_tag: token-classification widget: - text: "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}" example_title: "NER Example 1" --- # tner/bert-large-tweetner7-2021 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2021` split). Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set of 2021: - F1 (micro): 0.5974718775368201 - Precision (micro): 0.5992091183996279 - Recall (micro): 0.5957446808510638 - F1 (macro): 0.5392877076670867 - Precision (macro): 0.5398425980592713 - Recall (macro): 0.5439768272225339 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.4486772486772486 - creative_work: 0.34173228346456697 - event: 0.40238450074515647 - group: 0.556795797767564 - location: 0.6394904458598726 - person: 0.7940364439536168 - product: 0.5918972332015809 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.5884763705775744, 0.6075466841645367] - 95%: [0.586724466800271, 0.6087071446445204] - F1 (macro): - 90%: [0.5884763705775744, 0.6075466841645367] - 95%: [0.586724466800271, 0.6087071446445204] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip. 
```shell pip install tner ``` Tweets in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7) are pre-processed so that account names and URLs appear in special formats (see the dataset page for more detail), so we format raw tweets accordingly and then run the model prediction as below. ```python import re from urlextract import URLExtract from tner import TransformersNER extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek" text_format = format_tweet(text) model = TransformersNER("tner/bert-large-tweetner7-2021") model.predict([text_format]) ``` The model can also be used via the transformers library, but this is not recommended because the CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweetner7'] - dataset_split: train_2021 - dataset_name: None - local_dataset: None - model: bert-large-cased - crf: False - max_length: 128 - epoch: 30 - batch_size: 32 - lr: 0.0001 - random_seed: 0 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.3 - max_grad_norm: 1 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/trainer_config.json). ### Reference If you use the model, please cite the T-NER paper and the TweetNER7 paper. - T-NER ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. 
To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ``` - TweetNER7 ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
tner/bert-base-tweetner7-2020
tner
2022-09-27T15:20:02Z
122
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "dataset:tner/tweetner7", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-02T18:56:46Z
--- datasets: - tner/tweetner7 metrics: - f1 - precision - recall model-index: - name: tner/bert-base-tweetner7-2020 results: - task: name: Token Classification type: token-classification dataset: name: tner/tweetner7 type: tner/tweetner7 args: tner/tweetner7 metrics: - name: F1 (test_2021) type: f1 value: 0.6008989019741707 - name: Precision (test_2021) type: precision value: 0.591443610706686 - name: Recall (test_2021) type: recall value: 0.6106614246068455 - name: Macro F1 (test_2021) type: f1_macro value: 0.5467450408285621 - name: Macro Precision (test_2021) type: precision_macro value: 0.537717358363018 - name: Macro Recall (test_2021) type: recall_macro value: 0.5582367980568581 - name: Entity Span F1 (test_2021) type: f1_entity_span value: 0.7560892328704758 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.744313725490196 - name: Entity Span Recall (test_2021) type: recall_entity_span value: 0.7682433213831387 - name: F1 (test_2020) type: f1 value: 0.6087425796006476 - name: Precision (test_2020) type: precision value: 0.6340640809443507 - name: Recall (test_2020) type: recall value: 0.5853658536585366 - name: Macro F1 (test_2020) type: f1_macro value: 0.5648877924450979 - name: Macro Precision (test_2020) type: precision_macro value: 0.5930039411771633 - name: Macro Recall (test_2020) type: recall_macro value: 0.5426595099078766 - name: Entity Span F1 (test_2020) type: f1_entity_span value: 0.7242309767943875 - name: Entity Span Precision (test_2020) type: precision_entity_span value: 0.7543563799887577 - name: Entity Span Recall (test_2020) type: recall_entity_span value: 0.6964193046185781 pipeline_tag: token-classification widget: - text: "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}" example_title: "NER Example 1" --- # tner/bert-base-tweetner7-2020 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2020` split). Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set of 2021: - F1 (micro): 0.6008989019741707 - Precision (micro): 0.591443610706686 - Recall (micro): 0.6106614246068455 - F1 (macro): 0.5467450408285621 - Precision (macro): 0.537717358363018 - Recall (macro): 0.5582367980568581 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.4411294619072989 - creative_work: 0.3751552795031057 - event: 0.40279069767441866 - group: 0.5576791808873721 - location: 0.6179921773142112 - person: 0.8051622154507977 - product: 0.6273062730627307 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.5924664556782363, 0.6106294776916564] - 95%: [0.5905572257793882, 0.6119935888266077] - F1 (macro): - 90%: [0.5924664556782363, 0.6106294776916564] - 95%: [0.5905572257793882, 0.6119935888266077] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-base-tweetner7-2020/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/bert-base-tweetner7-2020/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip. 
```shell pip install tner ``` Tweets in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7) are pre-processed so that account names and URLs appear in special formats (see the dataset page for more detail), so we format raw tweets accordingly and then run the model prediction as below. ```python import re from urlextract import URLExtract from tner import TransformersNER extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek" text_format = format_tweet(text) model = TransformersNER("tner/bert-base-tweetner7-2020") model.predict([text_format]) ``` The model can also be used via the transformers library, but this is not recommended because the CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweetner7'] - dataset_split: train_2020 - dataset_name: None - local_dataset: None - model: bert-base-cased - crf: True - max_length: 128 - epoch: 30 - batch_size: 32 - lr: 1e-05 - random_seed: 0 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.15 - max_grad_norm: 1 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-base-tweetner7-2020/raw/main/trainer_config.json). ### Reference If you use the model, please cite the T-NER paper and the TweetNER7 paper. - T-NER ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. 
To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ``` - TweetNER7 ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
gabrielsgaspar/bert-base-uncased-emotions-augmented
gabrielsgaspar
2022-09-27T15:13:36Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-27T14:00:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-uncased-emotions-augmented results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-emotions-augmented This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9815 - Accuracy: 0.7539 - F1: 0.7506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8475 | 1.0 | 819 | 0.6336 | 0.7655 | 0.7651 | | 0.5594 | 2.0 | 1638 | 0.6109 | 0.7695 | 0.7680 | | 0.4596 | 3.0 | 2457 | 0.6528 | 0.7601 | 0.7556 | | 0.3663 | 4.0 | 3276 | 0.6992 | 0.7631 | 0.7612 | | 0.2809 | 5.0 | 4095 | 0.7773 | 0.7571 | 0.7542 | | 0.2142 | 6.0 | 4914 | 0.8879 | 0.7541 | 0.7504 | | 0.1671 | 7.0 | 5733 | 0.9476 | 0.7552 | 0.7517 | | 0.1416 | 8.0 | 6552 | 0.9815 | 0.7539 | 0.7506 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
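For readers less familiar with the transformers Trainer, the hyperparameters listed in the card above map roughly onto the following TrainingArguments. This is a hedged sketch: the output directory and per-epoch evaluation are assumptions inferred from the card's validation table, not values the card states explicitly.

```python
from transformers import TrainingArguments

# Rough mapping of the card's listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-emotions-augmented",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation table
)
```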
anas-awadalla/t5-small-few-shot-k-512-finetuned-squad-seed-0
anas-awadalla
2022-09-27T15:10:37Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-27T15:02:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-small-few-shot-k-512-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-few-shot-k-512-finetuned-squad-seed-0 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 35.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
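The card above gives no usage snippet. As a hedged sketch, the checkpoint can be queried like any other text2text model; the `question: ... context: ...` prompt format is an assumption, since the card does not document the input format used during fine-tuning.

```python
from transformers import pipeline

qa = pipeline(
    "text2text-generation",
    model="anas-awadalla/t5-small-few-shot-k-512-finetuned-squad-seed-0",
)

# Assumed SQuAD-style prompt; adjust if the training script used a different format.
prompt = (
    "question: Where is the Eiffel Tower located? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
print(qa(prompt, max_new_tokens=16))
```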
sd-concepts-library/felps
sd-concepts-library
2022-09-27T15:06:32Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-27T15:06:26Z
--- license: mit --- ### Felps on Stable Diffusion This is the `<Felps>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<Felps> 0](https://huggingface.co/sd-concepts-library/felps/resolve/main/concept_images/0.jpeg) ![<Felps> 1](https://huggingface.co/sd-concepts-library/felps/resolve/main/concept_images/3.jpeg) ![<Felps> 2](https://huggingface.co/sd-concepts-library/felps/resolve/main/concept_images/2.jpeg) ![<Felps> 3](https://huggingface.co/sd-concepts-library/felps/resolve/main/concept_images/1.jpeg)
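Besides the notebooks linked above, recent versions of diffusers can load such a concept directly. A minimal sketch, assuming Stable Diffusion v1.5 as the base checkpoint (the card does not state which base model was used) and a CUDA device:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load the learned <Felps> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/felps")

image = pipe("a photo of <Felps> as an astronaut").images[0]
image.save("felps.png")
```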
IIIT-L/xlm-roberta-base-finetuned-TRAC-DS
IIIT-L
2022-09-27T14:41:12Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-09T07:50:14Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: xlm-roberta-base-finetuned-TRAC-DS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-TRAC-DS This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0206 - Accuracy: 0.6814 - Precision: 0.6561 - Recall: 0.6528 - F1: 0.6543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.9928 | 0.5 | 612 | 0.9026 | 0.6201 | 0.5845 | 0.5812 | 0.5809 | | 0.8756 | 1.0 | 1224 | 0.7883 | 0.6373 | 0.6358 | 0.6382 | 0.6251 | | 0.7793 | 1.5 | 1836 | 0.8551 | 0.6340 | 0.6226 | 0.6368 | 0.6020 | | 0.7667 | 2.0 | 2448 | 0.7861 | 0.6618 | 0.6518 | 0.6637 | 0.6442 | | 0.6619 | 2.5 | 3060 | 0.8597 | 0.6887 | 0.6662 | 0.6472 | 0.6503 | | 0.6786 | 3.0 | 3672 | 0.7905 | 0.6634 | 0.6587 | 0.6658 | 0.6513 | | 0.573 | 3.5 | 4284 | 0.9263 | 0.6797 | 0.6575 | 0.6488 | 0.6514 | | 0.5805 | 4.0 | 4896 | 0.8351 | 0.6944 | 0.6719 | 0.6740 | 0.6723 | | 0.5069 | 4.5 | 5508 | 0.9772 | 0.6748 | 0.6564 | 0.6572 | 0.6546 | | 0.5085 | 5.0 | 6120 | 1.0206 | 0.6814 | 0.6561 | 0.6528 | 0.6543 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.1+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
anas-awadalla/t5-small-few-shot-k-128-finetuned-squad-seed-2
anas-awadalla
2022-09-27T14:19:03Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-27T13:59:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-small-few-shot-k-128-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-few-shot-k-128-finetuned-squad-seed-2 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 35.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
anas-awadalla/t5-small-few-shot-k-128-finetuned-squad-seed-0
anas-awadalla
2022-09-27T13:58:03Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-27T13:54:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-small-few-shot-k-128-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-few-shot-k-128-finetuned-squad-seed-0 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 35.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
anas-awadalla/t5-small-few-shot-k-64-finetuned-squad-seed-4
anas-awadalla
2022-09-27T13:52:23Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-27T13:49:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-small-few-shot-k-64-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-few-shot-k-64-finetuned-squad-seed-4 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
Najeen/marian-finetuned-kde4-en-to-fr
Najeen
2022-09-27T13:50:11Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-09-27T02:54:56Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.83113187001415 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8560 - Bleu: 52.8311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
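A hedged usage sketch for the fine-tuned checkpoint above; the example sentence is an arbitrary UI string, not taken from the KDE4 evaluation set.

```python
from transformers import pipeline

# English-to-French translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="Najeen/marian-finetuned-kde4-en-to-fr")
print(translator("Unable to import the selected file."))
```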
huynguyen208/bert-base-multilingual-cased-finetuned-ner
huynguyen208
2022-09-27T12:43:41Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-25T12:10:18Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-multilingual-cased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0247 - Precision: 0.9269 - Recall: 0.9509 - F1: 0.9387 - Accuracy: 0.9945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0744 | 1.0 | 843 | 0.0266 | 0.8945 | 0.9293 | 0.9116 | 0.9920 | | 0.016 | 2.0 | 1686 | 0.0239 | 0.9279 | 0.9446 | 0.9362 | 0.9942 | | 0.0075 | 3.0 | 2529 | 0.0247 | 0.9269 | 0.9509 | 0.9387 | 0.9945 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
anas-awadalla/t5-small-few-shot-k-32-finetuned-squad-seed-0
anas-awadalla
2022-09-27T12:43:36Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-27T12:35:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-small-few-shot-k-32-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-few-shot-k-32-finetuned-squad-seed-0 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
anas-awadalla/t5-small-few-shot-k-16-finetuned-squad-seed-2
anas-awadalla
2022-09-27T12:24:59Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-27T12:18:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-small-few-shot-k-16-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-few-shot-k-16-finetuned-squad-seed-2 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
Hoax0930/kyoto_marian_mod_2_1
Hoax0930
2022-09-27T11:09:17Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-09-27T09:18:33Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: kyoto_marian_mod_2_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kyoto_marian_mod_2_1 This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod_2_0](https://huggingface.co/Hoax0930/kyoto_marian_mod_2_0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2568 - Bleu: 20.9923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
ericntay/stbl_clinical_bert_ft_rs6
ericntay
2022-09-27T09:57:00Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-27T09:38:14Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: stbl_clinical_bert_ft_rs6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stbl_clinical_bert_ft_rs6 This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0876 - F1: 0.9177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2778 | 1.0 | 101 | 0.0871 | 0.8482 | | 0.066 | 2.0 | 202 | 0.0700 | 0.8892 | | 0.031 | 3.0 | 303 | 0.0657 | 0.9053 | | 0.0152 | 4.0 | 404 | 0.0716 | 0.9057 | | 0.0099 | 5.0 | 505 | 0.0717 | 0.9105 | | 0.0049 | 6.0 | 606 | 0.0807 | 0.9145 | | 0.0042 | 7.0 | 707 | 0.0796 | 0.9140 | | 0.0028 | 8.0 | 808 | 0.0833 | 0.9140 | | 0.002 | 9.0 | 909 | 0.0836 | 0.9141 | | 0.0013 | 10.0 | 1010 | 0.0866 | 0.9177 | | 0.0011 | 11.0 | 1111 | 0.0867 | 0.9178 | | 0.001 | 12.0 | 1212 | 0.0876 | 0.9177 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
hadiqa123/XLS-R_timit_en
hadiqa123
2022-09-27T09:26:46Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-22T05:39:00Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: XLS-R_timit_en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R_timit_en This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3799 - Wer: 0.3019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5228 | 3.3 | 1000 | 0.9889 | 0.8394 | | 0.6617 | 6.6 | 2000 | 0.3566 | 0.4027 | | 0.3177 | 9.9 | 3000 | 0.3112 | 0.3606 | | 0.2262 | 13.2 | 4000 | 0.3521 | 0.3324 | | 0.1683 | 16.5 | 5000 | 0.3563 | 0.3260 | | 0.137 | 19.8 | 6000 | 0.3605 | 0.3149 | | 0.1139 | 23.1 | 7000 | 0.3768 | 0.3069 | | 0.1068 | 26.4 | 8000 | 0.3643 | 0.3044 | | 0.0897 | 29.7 | 9000 | 0.3799 | 0.3019 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.0
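A hedged usage sketch for the fine-tuned checkpoint above. The audio path is a placeholder; the base model (wav2vec2-large-xlsr-53) operates on 16 kHz audio, so the input should be resampled accordingly.

```python
from transformers import pipeline

# CTC-based speech recognition with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="hadiqa123/XLS-R_timit_en")

# Placeholder path to a 16 kHz English recording.
print(asr("sample.wav")["text"])
```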
Hoax0930/kyoto_marian_mod_2_0
Hoax0930
2022-09-27T09:07:08Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-09-27T07:15:28Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: kyoto_marian_mod_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kyoto_marian_mod_3 This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod_2](https://huggingface.co/Hoax0930/kyoto_marian_mod_2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2477 - Bleu: 19.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
sd-concepts-library/rishusei-style
sd-concepts-library
2022-09-27T08:48:42Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-27T08:48:36Z
--- license: mit --- ### Rishusei style on Stable Diffusion This is the `<crishusei-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<crishusei-style> 0](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/0.jpeg) ![<crishusei-style> 1](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/3.jpeg) ![<crishusei-style> 2](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/2.jpeg) ![<crishusei-style> 3](https://huggingface.co/sd-concepts-library/rishusei-style/resolve/main/concept_images/1.jpeg)
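Besides the Conceptualizer notebook, the learned embedding can be pulled into a local diffusers pipeline. A minimal sketch, assuming a diffusers version that provides `load_textual_inversion` and using `runwayml/stable-diffusion-v1-5` as the base checkpoint (that choice is an assumption, not part of this repository):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (assumed; any SD 1.x checkpoint should behave similarly).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <crishusei-style> embedding straight from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/rishusei-style")

# Use the concept token in a prompt.
pipe("a castle in the clouds in <crishusei-style>").images[0].save("rishusei_sample.png")
```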
nlpodyssey/bert-italian-uncased-iptc-headlines
nlpodyssey
2022-09-27T07:43:12Z
104
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-23T07:34:20Z
--- license: apache-2.0 language: - it tags: - text-classification --- # bert-italian-uncased-iptc-headlines This a bert-base-italian-uncased model fine-tuned for news headlines classification in Italian. It predicts the top-level category of the [IPTC subject taxonomy](https://cv.iptc.org/newscodes/subjectcode): | Class | English label | | -------- | ----------------- | | 01000000 | Arts, Culture & Entertainment | | 02000000 | Crime, Law & Justice | | 03000000 | Disasters & Accidents | | 04000000 | Economy, Business & Finance | | 05000000 | Education | | 06000000 | Environmental Issues | | 07000000 | Health | | 08000000 | Human Interest | | 09000000 | Labour | | 10000000 | Lifestyle & Leisure | | 11000000 | Politics | | 12000000 | Religion & Belief | | 13000000 | Science & Technology | | 14000000 | Social Issues | | 15000000 | Sport | | 16000000 | Unrest, Conflicts & War | | 17000000 | Weather | # Authors The [NLP Odyssey](https://github.com/nlpodyssey/) Authors (Matteo Grella, Marco Nicola)
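A minimal inference sketch for `nlpodyssey/bert-italian-uncased-iptc-headlines`, assuming it works with the standard transformers text-classification pipeline and that the returned label corresponds to one of the IPTC categories in the table above (the example headline is illustrative only):

```python
from transformers import pipeline

# Load the fine-tuned Italian headline classifier.
classifier = pipeline("text-classification", model="nlpodyssey/bert-italian-uncased-iptc-headlines")

# Classify an Italian news headline.
print(classifier("Il governo approva la nuova legge di bilancio"))
# Expected shape: [{"label": <IPTC category>, "score": <confidence>}]
```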
crescendonow/pwa_categorical_complaint
crescendonow
2022-09-27T07:42:44Z
161
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-27T07:24:36Z
--- license: apache-2.0 --- This model is fine-tuned from WangchanBERTa ("wangchanberta-base-att-spm-uncased") using only data from the Provincial Waterworks Authority of Thailand. The model classifies text into ten categories, described by the dictionary {'ข้อร้องเรียน-ปริมาณน้ำ':[11,0], 'ข้อร้องเรียน-ท่อแตกรั่ว':[12,1], 'ข้อร้องเรียน-คุณภาพน้ำ':[13,2], 'ข้อร้องเรียน-การบริการ':[14,3], 'ข้อร้องเรียน-บุคลากร':[15,4], 'ข้อสอบถามทั่วไป':[2,5], 'ข้อเสนอแนะ':[3,6], 'ข้อคิดเห็น':[4,7], 'อื่นๆ':[8,8], 'ไม่เกี่ยวข้องกับกปภ.':[9,9]} (complaints about water quantity, pipe leaks, water quality, service, and personnel; general inquiries; suggestions; opinions; other; and items not related to the PWA).
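A minimal sketch of how that dictionary could be combined with `crescendonow/pwa_categorical_complaint` at inference time; the text-classification pipeline call, the default `LABEL_<i>` label naming, and the Thai example complaint are all assumptions rather than anything documented in the card:

```python
from transformers import pipeline

# Mapping from Thai category label to [original PWA code, class index], copied from the card.
categories = {
    'ข้อร้องเรียน-ปริมาณน้ำ': [11, 0], 'ข้อร้องเรียน-ท่อแตกรั่ว': [12, 1],
    'ข้อร้องเรียน-คุณภาพน้ำ': [13, 2], 'ข้อร้องเรียน-การบริการ': [14, 3],
    'ข้อร้องเรียน-บุคลากร': [15, 4], 'ข้อสอบถามทั่วไป': [2, 5],
    'ข้อเสนอแนะ': [3, 6], 'ข้อคิดเห็น': [4, 7],
    'อื่นๆ': [8, 8], 'ไม่เกี่ยวข้องกับกปภ.': [9, 9],
}
# Invert it: class index -> (Thai label, original PWA code).
index_to_label = {idx: (label, code) for label, (code, idx) in categories.items()}

classifier = pipeline("text-classification", model="crescendonow/pwa_categorical_complaint")
pred = classifier("น้ำประปาไหลอ่อนมากช่วงเย็น")[0]  # hypothetical complaint: "tap water pressure is very low in the evening"
class_index = int(pred["label"].split("_")[-1])      # assumes the default "LABEL_<i>" naming in the model config
print(index_to_label[class_index], pred["score"])
```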
albertdestajo/distilbert-base-uncased-finetuned-mrpc
albertdestajo
2022-09-27T05:40:59Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-26T18:31:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: train args: mrpc metrics: - name: Accuracy type: accuracy value: 0.7916666666666666 - name: F1 type: f1 value: 0.8608837970540099 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4502 - Accuracy: 0.7917 - F1: 0.8609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4474 | 1.0 | 230 | 0.4502 | 0.7917 | 0.8609 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
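A minimal sentence-pair inference sketch for `albertdestajo/distilbert-base-uncased-finetuned-mrpc`; the example sentences are illustrative, and the label convention (index 1 = paraphrase) follows the usual GLUE/MRPC setup rather than anything stated in the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "albertdestajo/distilbert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task, so encode both sentences together.
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly profits at the firm were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 = paraphrase, 0 = not a paraphrase (assumed MRPC convention)
```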
kerkathy/distilbert-base-uncased-finetuned-imdb
kerkathy
2022-09-27T04:57:38Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-27T04:50:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
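A minimal sketch of querying the domain-adapted masked language model `kerkathy/distilbert-base-uncased-finetuned-imdb`, assuming the standard fill-mask pipeline and DistilBERT's `[MASK]` token:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="kerkathy/distilbert-base-uncased-finetuned-imdb")

# The model was adapted on IMDB reviews, so a movie-flavoured prompt is a natural probe.
for pred in fill("This movie was an absolute [MASK]."):
    print(f'{pred["token_str"]:>12}  {pred["score"]:.3f}')
```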
Ricardmc99/Reinforce-CartPole-v1
Ricardmc99
2022-09-27T03:46:36Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-09-27T03:45:38Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 90.30 +/- 49.35 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
SmilestheSad/bert-base-multilingual-uncased-sep-26
SmilestheSad
2022-09-27T03:46:06Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-27T01:23:12Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-multilingual-uncased-sep-26 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-uncased-sep-26 This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0483 - F1: 0.9369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.0798 | 1.0 | 8623 | 0.0682 | 0.8979 | | 0.0498 | 2.0 | 17246 | 0.0551 | 0.9270 | | 0.0351 | 3.0 | 25869 | 0.0483 | 0.9369 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Ricardmc99/testpyramidsrnd
Ricardmc99
2022-09-27T02:57:53Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-09-27T02:57:47Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: Ricardmc99/testpyramidsrnd 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
sd-concepts-library/happy-person12345
sd-concepts-library
2022-09-27T02:56:00Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-27T02:55:55Z
--- license: mit --- ### Happy_Person12345 on Stable Diffusion This is the `<Happy-Person12345>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Happy-Person12345> 0](https://huggingface.co/sd-concepts-library/happy-person12345/resolve/main/concept_images/0.jpeg) ![<Happy-Person12345> 1](https://huggingface.co/sd-concepts-library/happy-person12345/resolve/main/concept_images/3.jpeg) ![<Happy-Person12345> 2](https://huggingface.co/sd-concepts-library/happy-person12345/resolve/main/concept_images/2.jpeg) ![<Happy-Person12345> 3](https://huggingface.co/sd-concepts-library/happy-person12345/resolve/main/concept_images/1.jpeg)
sd-concepts-library/kawaii-girl-plus-object
sd-concepts-library
2022-09-26T22:23:30Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-26T22:23:21Z
--- license: mit --- ### kawaii_girl_plus_object on Stable Diffusion This is the `<kawaii_girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<kawaii_girl> 0](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/0.png) ![<kawaii_girl> 1](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/1.png) ![<kawaii_girl> 2](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/2.png) ![<kawaii_girl> 3](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/3.png) ![<kawaii_girl> 4](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/4.png) ![<kawaii_girl> 5](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/5.png) ![<kawaii_girl> 6](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/6.png) ![<kawaii_girl> 7](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/7.png) ![<kawaii_girl> 8](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/8.png) ![<kawaii_girl> 9](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/9.png) ![<kawaii_girl> 10](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/10.png) ![<kawaii_girl> 11](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/11.png) ![<kawaii_girl> 12](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/12.png) ![<kawaii_girl> 13](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/13.png) ![<kawaii_girl> 14](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/14.png) ![<kawaii_girl> 15](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/15.png) ![<kawaii_girl> 16](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/16.png) ![<kawaii_girl> 17](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/17.png) ![<kawaii_girl> 18](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/18.png) ![<kawaii_girl> 19](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/19.png) ![<kawaii_girl> 20](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/20.png) ![<kawaii_girl> 21](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/21.png) ![<kawaii_girl> 22](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/22.png) ![<kawaii_girl> 
23](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/23.png) ![<kawaii_girl> 24](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/24.png) ![<kawaii_girl> 25](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/25.png) ![<kawaii_girl> 26](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/26.png) ![<kawaii_girl> 27](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/27.png) ![<kawaii_girl> 28](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/28.png) ![<kawaii_girl> 29](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/29.png) ![<kawaii_girl> 30](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/30.png) ![<kawaii_girl> 31](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/31.png) ![<kawaii_girl> 32](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/32.png) ![<kawaii_girl> 33](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/33.png) ![<kawaii_girl> 34](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/34.png) ![<kawaii_girl> 35](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/35.png) ![<kawaii_girl> 36](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/36.png) ![<kawaii_girl> 37](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-object/resolve/main/concept_images/37.png)
ammarpl/t5-base-finetuned-eli5-a
ammarpl
2022-09-26T22:02:48Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:eli5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-26T19:36:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eli5 metrics: - rouge model-index: - name: t5-base-finetuned-eli5-a results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: eli5 type: eli5 config: LFQA_reddit split: train_eli5 args: LFQA_reddit metrics: - name: Rouge1 type: rouge value: 14.6711 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-finetuned-eli5-a This model is a fine-tuned version of [ammarpl/t5-base-finetuned-xsum-a](https://huggingface.co/ammarpl/t5-base-finetuned-xsum-a) on the eli5 dataset. It achieves the following results on the evaluation set: - Loss: 3.1773 - Rouge1: 14.6711 - Rouge2: 2.2878 - Rougel: 11.3676 - Rougelsum: 13.1805 - Gen Len: 18.9892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.3417 | 1.0 | 17040 | 3.1773 | 14.6711 | 2.2878 | 11.3676 | 13.1805 | 18.9892 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
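A minimal generation sketch for `ammarpl/t5-base-finetuned-eli5-a`, assuming it loads with the text2text-generation pipeline; feeding the question directly, with no task prefix, is an assumption about how the ELI5 fine-tuning was framed:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="ammarpl/t5-base-finetuned-eli5-a")

question = "Why is the sky blue?"  # illustrative ELI5-style question
out = generator(question, max_length=64, num_beams=4)
print(out[0]["generated_text"])
```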
enaserian/distilbert-base-uncased-finetuned
enaserian
2022-09-26T20:11:39Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-23T10:58:52Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.2813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.6309 | 1.0 | 76 | 7.4774 | | 7.0806 | 2.0 | 152 | 6.9937 | | 6.6842 | 3.0 | 228 | 6.9314 | | 6.4592 | 4.0 | 304 | 6.9088 | | 6.2936 | 5.0 | 380 | 6.9135 | | 6.1301 | 6.0 | 456 | 6.9018 | | 5.9878 | 7.0 | 532 | 6.8865 | | 5.8071 | 8.0 | 608 | 6.8926 | | 5.6372 | 9.0 | 684 | 6.8750 | | 5.4791 | 10.0 | 760 | 6.9394 | | 5.3365 | 11.0 | 836 | 6.9594 | | 5.2117 | 12.0 | 912 | 6.9962 | | 5.0887 | 13.0 | 988 | 7.0570 | | 4.9288 | 14.0 | 1064 | 7.0549 | | 4.8169 | 15.0 | 1140 | 7.0971 | | 4.7008 | 16.0 | 1216 | 7.1439 | | 4.6149 | 17.0 | 1292 | 7.1320 | | 4.487 | 18.0 | 1368 | 7.1577 | | 4.364 | 19.0 | 1444 | 7.1712 | | 4.3208 | 20.0 | 1520 | 7.1959 | | 4.2492 | 21.0 | 1596 | 7.2136 | | 4.1423 | 22.0 | 1672 | 7.2304 | | 4.0873 | 23.0 | 1748 | 7.2526 | | 4.0261 | 24.0 | 1824 | 7.2681 | | 3.9598 | 25.0 | 1900 | 7.2715 | | 3.9562 | 26.0 | 1976 | 7.2648 | | 3.8951 | 27.0 | 2052 | 7.2665 | | 3.8772 | 28.0 | 2128 | 7.2781 | | 3.8403 | 29.0 | 2204 | 7.2801 | | 3.8275 | 30.0 | 2280 | 7.2813 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
BumblingOrange/Hiten
BumblingOrange
2022-09-26T19:26:04Z
0
35
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-09-26T18:50:36Z
--- license: bigscience-bloom-rail-1.0 --- Uses the Waifu Diffusion model as a base, linked here: https://huggingface.co/hakurei/waifu-diffusion Custom Dreambooth model based off of the artist Hiten. Dataset was 6 training images, and 34 regularization images. Trained for 4000 steps. To use the model, simply insert the name 'Hiten' into your prompts. The class token used was 'girl_anime_8k_wallpaper'. Append the class token after Hiten for stronger results. EX: "1girl by Hiten girl_anime_8k_wallpaper" ![sCFaFd9hP4.jpg](https://s3.amazonaws.com/moonup/production/uploads/1664220354865-6303fe3cd14428368d1a4137.jpeg)
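A minimal diffusers sketch of the prompting recipe above; it assumes the `BumblingOrange/Hiten` repository ships diffusers-format weights (if only a `.ckpt` file is provided, it would need converting first):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune of Waifu Diffusion (assumes diffusers-format weights in the repo).
pipe = StableDiffusionPipeline.from_pretrained(
    "BumblingOrange/Hiten", torch_dtype=torch.float16
).to("cuda")

# Instance token 'Hiten' followed by the class token, as recommended in the card.
prompt = "1girl by Hiten girl_anime_8k_wallpaper"
pipe(prompt, guidance_scale=7.5).images[0].save("hiten_sample.png")
```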
sd-concepts-library/kira-sensei
sd-concepts-library
2022-09-26T19:25:20Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-26T19:25:07Z
--- license: mit --- ### kira-sensei on Stable Diffusion This is the `<kira-sensei>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<kira-sensei> 0](https://huggingface.co/sd-concepts-library/kira-sensei/resolve/main/concept_images/3.jpeg) ![<kira-sensei> 1](https://huggingface.co/sd-concepts-library/kira-sensei/resolve/main/concept_images/1.jpeg) ![<kira-sensei> 2](https://huggingface.co/sd-concepts-library/kira-sensei/resolve/main/concept_images/0.jpeg) ![<kira-sensei> 3](https://huggingface.co/sd-concepts-library/kira-sensei/resolve/main/concept_images/2.jpeg)
mrm8488/setfit-mpnet-base-v2-finetuned-sentEval-CR
mrm8488
2022-09-26T18:50:11Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-26T18:49:59Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 20, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
LucasBorth/jurisbert-base-classify
LucasBorth
2022-09-26T18:16:34Z
101
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-26T18:12:23Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: jurisbert-base-classify results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jurisbert-base-classify This model is a fine-tuned version of [juridics/jurisbert-base-portuguese-uncased](https://huggingface.co/juridics/jurisbert-base-portuguese-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4893 - Accuracy: 0.8991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
espnet/transformer_tts_cmu_indic_hin_ab
espnet
2022-09-26T18:15:07Z
0
2
espnet
[ "espnet", "audio", "text-to-speech", "hi", "dataset:cmu_indic", "region:us" ]
text-to-speech
2022-09-26T18:02:38Z
--- tags: - espnet - audio - text-to-speech language: hi datasets: - cmu_indic ---
sd-concepts-library/fairytale
sd-concepts-library
2022-09-26T17:23:45Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-26T17:23:44Z
--- license: mit --- ### fAIrytale on Stable Diffusion This is the `<fAIrytale>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<fAIrytale> 0](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/276.jpeg) ![<fAIrytale> 1](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/643.jpeg) ![<fAIrytale> 2](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/296.jpeg) ![<fAIrytale> 3](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/332.jpeg) ![<fAIrytale> 4](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/267.jpeg) ![<fAIrytale> 5](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/765.jpeg) ![<fAIrytale> 6](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/205.jpeg) ![<fAIrytale> 7](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/429.jpeg) ![<fAIrytale> 8](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/432.jpeg) ![<fAIrytale> 9](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/720.jpeg) ![<fAIrytale> 10](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/305.jpeg) ![<fAIrytale> 11](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/473.jpeg) ![<fAIrytale> 12](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/677.jpeg) ![<fAIrytale> 13](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/761.jpeg) ![<fAIrytale> 14](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/567.jpeg) ![<fAIrytale> 15](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/752.jpeg) ![<fAIrytale> 16](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/115.jpeg) ![<fAIrytale> 17](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/199.jpeg) ![<fAIrytale> 18](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/324.jpeg) ![<fAIrytale> 19](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/244.jpeg) ![<fAIrytale> 20](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/434.jpeg) ![<fAIrytale> 21](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/491.jpeg) ![<fAIrytale> 22](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/316.jpeg) ![<fAIrytale> 23](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/328.jpeg) ![<fAIrytale> 24](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/147.jpeg) ![<fAIrytale> 25](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/263.jpeg) ![<fAIrytale> 26](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/264.jpeg) 
![<fAIrytale> 27](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/678.jpeg) ![<fAIrytale> 28](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/482.jpeg) ![<fAIrytale> 29](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/362.jpeg) ![<fAIrytale> 30](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/400.jpeg) ![<fAIrytale> 31](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/675.jpeg) ![<fAIrytale> 32](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/356.jpeg) ![<fAIrytale> 33](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/440.jpeg) ![<fAIrytale> 34](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/99.jpeg) ![<fAIrytale> 35](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/704.jpeg) ![<fAIrytale> 36](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/240.jpeg) ![<fAIrytale> 37](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/287.jpeg) ![<fAIrytale> 38](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/207.jpeg) ![<fAIrytale> 39](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/249.jpeg) ![<fAIrytale> 40](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/321.jpeg) ![<fAIrytale> 41](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/708.jpeg) ![<fAIrytale> 42](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/391.jpeg) ![<fAIrytale> 43](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/602.jpeg) ![<fAIrytale> 44](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/416.jpeg) ![<fAIrytale> 45](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/631.jpeg) ![<fAIrytale> 46](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/73.jpeg) ![<fAIrytale> 47](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/406.jpeg) ![<fAIrytale> 48](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/449.jpeg) ![<fAIrytale> 49](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/596.jpeg) ![<fAIrytale> 50](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/665.jpeg) ![<fAIrytale> 51](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/66.jpeg) ![<fAIrytale> 52](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/74.jpeg) ![<fAIrytale> 53](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/619.jpeg) ![<fAIrytale> 54](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/408.jpeg) ![<fAIrytale> 55](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/201.jpeg) ![<fAIrytale> 56](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/671.jpeg) ![<fAIrytale> 57](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/632.jpeg) ![<fAIrytale> 58](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/232.jpeg) ![<fAIrytale> 
59](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/489.jpeg) ![<fAIrytale> 60](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/736.jpeg) ![<fAIrytale> 61](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/92.jpeg) ![<fAIrytale> 62](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/202.jpeg) ![<fAIrytale> 63](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/763.jpeg) ![<fAIrytale> 64](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/241.jpeg) ![<fAIrytale> 65](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/775.jpeg) ![<fAIrytale> 66](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/552.jpeg) ![<fAIrytale> 67](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/40.jpeg) ![<fAIrytale> 68](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/524.jpeg) ![<fAIrytale> 69](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/374.jpeg) ![<fAIrytale> 70](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/639.jpeg) ![<fAIrytale> 71](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/533.jpeg) ![<fAIrytale> 72](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/564.jpeg) ![<fAIrytale> 73](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/770.jpeg) ![<fAIrytale> 74](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/338.jpeg) ![<fAIrytale> 75](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/636.jpeg) ![<fAIrytale> 76](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/578.jpeg) ![<fAIrytale> 77](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/228.jpeg) ![<fAIrytale> 78](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/728.jpeg) ![<fAIrytale> 79](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/492.jpeg) ![<fAIrytale> 80](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/620.jpeg) ![<fAIrytale> 81](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/764.jpeg) ![<fAIrytale> 82](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/725.jpeg) ![<fAIrytale> 83](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/140.jpeg) ![<fAIrytale> 84](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/450.jpeg) ![<fAIrytale> 85](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/591.jpeg) ![<fAIrytale> 86](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/101.jpeg) ![<fAIrytale> 87](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/9.jpeg) ![<fAIrytale> 88](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/695.jpeg) ![<fAIrytale> 89](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/444.jpeg) ![<fAIrytale> 90](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/769.jpeg) ![<fAIrytale> 
91](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/18.jpeg) ![<fAIrytale> 92](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/705.jpeg) ![<fAIrytale> 93](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/724.jpeg) ![<fAIrytale> 94](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/22.jpeg) ![<fAIrytale> 95](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/510.jpeg) ![<fAIrytale> 96](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/78.jpeg) ![<fAIrytale> 97](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/283.jpeg) ![<fAIrytale> 98](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/756.jpeg) ![<fAIrytale> 99](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/102.jpeg) ![<fAIrytale> 100](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/719.jpeg) ![<fAIrytale> 101](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/701.jpeg) ![<fAIrytale> 102](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/230.jpeg) ![<fAIrytale> 103](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/351.jpeg) ![<fAIrytale> 104](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/372.jpeg) ![<fAIrytale> 105](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/19.jpeg) ![<fAIrytale> 106](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/714.jpeg) ![<fAIrytale> 107](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/743.jpeg) ![<fAIrytale> 108](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/235.jpeg) ![<fAIrytale> 109](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/47.jpeg) ![<fAIrytale> 110](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/337.jpeg) ![<fAIrytale> 111](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/278.jpeg) ![<fAIrytale> 112](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/90.jpeg) ![<fAIrytale> 113](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/347.jpeg) ![<fAIrytale> 114](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/145.jpeg) ![<fAIrytale> 115](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/715.jpeg) ![<fAIrytale> 116](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/124.jpeg) ![<fAIrytale> 117](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/361.jpeg) ![<fAIrytale> 118](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/534.jpeg) ![<fAIrytale> 119](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/29.jpeg) ![<fAIrytale> 120](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/186.jpeg) ![<fAIrytale> 121](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/727.jpeg) ![<fAIrytale> 122](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/314.jpeg) ![<fAIrytale> 
123](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/95.jpeg) ![<fAIrytale> 124](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/274.jpeg) ![<fAIrytale> 125](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/559.jpeg) ![<fAIrytale> 126](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/159.jpeg) ![<fAIrytale> 127](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/693.jpeg) ![<fAIrytale> 128](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/394.jpeg) ![<fAIrytale> 129](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/250.jpeg) ![<fAIrytale> 130](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/180.jpeg) ![<fAIrytale> 131](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/210.jpeg) ![<fAIrytale> 132](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/128.jpeg) ![<fAIrytale> 133](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/117.jpeg) ![<fAIrytale> 134](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/569.jpeg) ![<fAIrytale> 135](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/10.jpeg) ![<fAIrytale> 136](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/152.jpeg) ![<fAIrytale> 137](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/745.jpeg) ![<fAIrytale> 138](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/396.jpeg) ![<fAIrytale> 139](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/357.jpeg) ![<fAIrytale> 140](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/273.jpeg) ![<fAIrytale> 141](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/735.jpeg) ![<fAIrytale> 142](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/684.jpeg) ![<fAIrytale> 143](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/98.jpeg) ![<fAIrytale> 144](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/81.jpeg) ![<fAIrytale> 145](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/142.jpeg) ![<fAIrytale> 146](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/139.jpeg) ![<fAIrytale> 147](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/364.jpeg) ![<fAIrytale> 148](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/3.jpeg) ![<fAIrytale> 149](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/568.jpeg) ![<fAIrytale> 150](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/548.jpeg) ![<fAIrytale> 151](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/285.jpeg) ![<fAIrytale> 152](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/229.jpeg) ![<fAIrytale> 153](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/716.jpeg) ![<fAIrytale> 154](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/407.jpeg) ![<fAIrytale> 
155](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/441.jpeg) ![<fAIrytale> 156](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/582.jpeg) ![<fAIrytale> 157](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/721.jpeg) ![<fAIrytale> 158](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/683.jpeg) ![<fAIrytale> 159](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/738.jpeg) ![<fAIrytale> 160](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/734.jpeg) ![<fAIrytale> 161](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/641.jpeg) ![<fAIrytale> 162](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/89.jpeg) ![<fAIrytale> 163](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/239.jpeg) ![<fAIrytale> 164](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/162.jpeg) ![<fAIrytale> 165](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/694.jpeg) ![<fAIrytale> 166](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/1.jpeg) ![<fAIrytale> 167](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/487.jpeg) ![<fAIrytale> 168](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/148.jpeg) ![<fAIrytale> 169](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/133.jpeg) ![<fAIrytale> 170](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/431.jpeg) ![<fAIrytale> 171](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/646.jpeg) ![<fAIrytale> 172](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/317.jpeg) ![<fAIrytale> 173](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/138.jpeg) ![<fAIrytale> 174](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/184.jpeg) ![<fAIrytale> 175](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/648.jpeg) ![<fAIrytale> 176](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/676.jpeg) ![<fAIrytale> 177](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/474.jpeg) ![<fAIrytale> 178](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/114.jpeg) ![<fAIrytale> 179](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/608.jpeg) ![<fAIrytale> 180](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/174.jpeg) ![<fAIrytale> 181](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/542.jpeg) ![<fAIrytale> 182](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/118.jpeg) ![<fAIrytale> 183](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/711.jpeg) ![<fAIrytale> 184](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/308.jpeg) ![<fAIrytale> 185](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/663.jpeg) ![<fAIrytale> 186](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/642.jpeg) ![<fAIrytale> 
187](https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/306.jpeg)

*(Concept images 188–776 for `<fAIrytale>` follow in the original card, all hosted under https://huggingface.co/sd-concepts-library/fairytale/resolve/main/concept_images/.)*
ammarpl/t5-small-finetuned-xsum
ammarpl
2022-09-26T16:38:17Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-25T16:48:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.01 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 0.01 | 128 | 3.0141 | 18.0313 | 2.7105 | 14.1325 | 14.3393 | 18.8882 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
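The card stops at the training summary; a minimal inference sketch with the `transformers` summarization pipeline is below. The input passage and generation lengths are illustrative choices, and summaries will be rough given the very short (0.01 epoch) fine-tuning run.

```python
from transformers import pipeline

# Seq2seq summarization with the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="ammarpl/t5-small-finetuned-xsum")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris, receiving millions of visitors every year."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False))
```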
pjcordero04/distilbert-base-uncased-finetuned-cola
pjcordero04
2022-09-26T16:32:49Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-26T14:35:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5442538936990396 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8348 - Matthews Correlation: 0.5443 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5236 | 1.0 | 535 | 0.5495 | 0.4205 | | 0.3505 | 2.0 | 1070 | 0.5176 | 0.4977 | | 0.2401 | 3.0 | 1605 | 0.5498 | 0.5354 | | 0.1751 | 4.0 | 2140 | 0.7975 | 0.5270 | | 0.1229 | 5.0 | 2675 | 0.8348 | 0.5443 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
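No usage example is included; a minimal sketch with the `transformers` text-classification pipeline follows. The label names depend on the `id2label` mapping saved with the checkpoint (the default `LABEL_0`/`LABEL_1` is assumed here, with `LABEL_1` conventionally the acceptable class for CoLA).

```python
from transformers import pipeline

# Binary linguistic-acceptability classification (CoLA).
classifier = pipeline(
    "text-classification",
    model="pjcordero04/distilbert-base-uncased-finetuned-cola",
)

# A grammatical and an ungrammatical sentence for contrast.
print(classifier("The book was written by a famous author."))
print(classifier("Book the famous was a by author written."))
```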
shoang/wav2vec2-base-timit-demo-google-colab
shoang
2022-09-26T16:25:55Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-26T14:27:33Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5218 - Wer: 0.3434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5634 | 1.0 | 500 | 2.0727 | 1.0096 | | 0.9357 | 2.01 | 1000 | 0.6623 | 0.5634 | | 0.4536 | 3.01 | 1500 | 1.4421 | 0.4829 | | 0.3044 | 4.02 | 2000 | 0.4361 | 0.4363 | | 0.2369 | 5.02 | 2500 | 0.5098 | 0.4495 | | 0.1994 | 6.02 | 3000 | 0.4741 | 0.3711 | | 0.1699 | 7.03 | 3500 | 0.4652 | 0.3898 | | 0.1499 | 8.03 | 4000 | 0.4151 | 0.3949 | | 0.1308 | 9.04 | 4500 | 0.4685 | 0.3838 | | 0.1234 | 10.04 | 5000 | 0.5076 | 0.3794 | | 0.1055 | 11.04 | 5500 | 0.4492 | 0.3790 | | 0.0953 | 12.05 | 6000 | 0.4726 | 0.3679 | | 0.0863 | 13.05 | 6500 | 0.4797 | 0.3717 | | 0.0816 | 14.06 | 7000 | 0.4725 | 0.3655 | | 0.0842 | 15.06 | 7500 | 0.5181 | 0.3405 | | 0.0661 | 16.06 | 8000 | 0.5315 | 0.3510 | | 0.0593 | 17.07 | 8500 | 0.5024 | 0.3668 | | 0.0624 | 18.07 | 9000 | 0.5374 | 0.3663 | | 0.0535 | 19.08 | 9500 | 0.4861 | 0.3517 | | 0.0524 | 20.08 | 10000 | 0.4812 | 0.3574 | | 0.0461 | 21.08 | 10500 | 0.4976 | 0.3431 | | 0.0363 | 22.09 | 11000 | 0.5062 | 0.3476 | | 0.0351 | 23.09 | 11500 | 0.5094 | 0.3479 | | 0.0327 | 24.1 | 12000 | 0.5291 | 0.3455 | | 0.0319 | 25.1 | 12500 | 0.5209 | 0.3460 | | 0.0268 | 26.1 | 13000 | 0.5173 | 0.3481 | | 0.0263 | 27.11 | 13500 | 0.5362 | 0.3486 | | 0.0234 | 28.11 | 14000 | 0.5333 | 0.3444 | | 0.0237 | 29.12 | 14500 | 0.5218 | 0.3434 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.0
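The card lacks an inference example; a minimal sketch with the `transformers` automatic-speech-recognition pipeline is below. The audio path is a placeholder, and input is expected as 16 kHz mono audio to match the wav2vec2-base pretraining setup.

```python
from transformers import pipeline

# CTC-based speech recognition; the pipeline handles feature extraction and greedy decoding.
asr = pipeline(
    "automatic-speech-recognition",
    model="shoang/wav2vec2-base-timit-demo-google-colab",
)

# Replace with a real 16 kHz mono WAV file.
print(asr("speech_sample.wav"))
```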
tner/deberta-v3-large-wnut2017
tner
2022-09-26T15:10:46Z
30
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "dataset:tner/wnut2017", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-09T23:14:32Z
--- datasets: - tner/wnut2017 metrics: - f1 - precision - recall model-index: - name: tner/deberta-v3-large-wnut2017 results: - task: name: Token Classification type: token-classification dataset: name: tner/wnut2017 type: tner/wnut2017 args: tner/wnut2017 metrics: - name: F1 type: f1 value: 0.5047353760445682 - name: Precision type: precision value: 0.63268156424581 - name: Recall type: recall value: 0.4198331788693234 - name: F1 (macro) type: f1_macro value: 0.4165125500830091 - name: Precision (macro) type: precision_macro value: 0.5356144444686111 - name: Recall (macro) type: recall_macro value: 0.3573954549633822 - name: F1 (entity span) type: f1_entity_span value: 0.6249999999999999 - name: Precision (entity span) type: precision_entity_span value: 0.7962697274031564 - name: Recall (entity span) type: recall_entity_span value: 0.5143651529193698 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-v3-large-wnut2017 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.5047353760445682 - Precision (micro): 0.63268156424581 - Recall (micro): 0.4198331788693234 - F1 (macro): 0.4165125500830091 - Precision (macro): 0.5356144444686111 - Recall (macro): 0.3573954549633822 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.25477707006369427 - group: 0.34309623430962344 - location: 0.6187050359712232 - person: 0.6721763085399448 - product: 0.18579234972677597 - work_of_art: 0.42452830188679247 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.4752384997212858, 0.5329114690850492] - 95%: [0.46929053844001617, 0.537282841423422] - F1 (macro): - 90%: [0.4752384997212858, 0.5329114690850492] - 95%: [0.46929053844001617, 0.537282841423422] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-wnut2017/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-wnut2017/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-v3-large-wnut2017") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/wnut2017'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-v3-large - crf: False - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-wnut2017/raw/main/trainer_config.json). 
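Because this particular checkpoint was trained with `crf: False` (see the hyperparameters above), inference through the plain `transformers` pipeline should not lose a CRF decoding step. A minimal sketch (the aggregation strategy is an arbitrary choice):

```python
from transformers import pipeline

# Token classification without the tner wrapper; viable here because no CRF layer was used.
# "simple" aggregation merges word pieces into entity-level spans.
ner = pipeline(
    "token-classification",
    model="tner/deberta-v3-large-wnut2017",
    aggregation_strategy="simple",
)
print(ner("Jacob Collier is a Grammy awarded artist from England."))
```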
### Reference

If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/deberta-v3-large-ttc
tner
2022-09-26T14:41:30Z
4
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "dataset:tner/ttc", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-17T11:20:57Z
--- datasets: - tner/ttc metrics: - f1 - precision - recall model-index: - name: tner/deberta-v3-large-ttc results: - task: name: Token Classification type: token-classification dataset: name: tner/ttc type: tner/ttc args: tner/ttc metrics: - name: F1 type: f1 value: 0.8266925817946227 - name: Precision type: precision value: 0.8264248704663213 - name: Recall type: recall value: 0.8269604666234608 - name: F1 (macro) type: f1_macro value: 0.8267742072572187 - name: Precision (macro) type: precision_macro value: 0.8278533291801137 - name: Recall (macro) type: recall_macro value: 0.8257668793195109 - name: F1 (entity span) type: f1_entity_span value: 0.8713961775186264 - name: Precision (entity span) type: precision_entity_span value: 0.8711139896373057 - name: Recall (entity span) type: recall_entity_span value: 0.8716785482825664 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-v3-large-ttc This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tner/ttc](https://huggingface.co/datasets/tner/ttc) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.8266925817946227 - Precision (micro): 0.8264248704663213 - Recall (micro): 0.8269604666234608 - F1 (macro): 0.8267742072572187 - Precision (macro): 0.8278533291801137 - Recall (macro): 0.8257668793195109 The per-entity breakdown of the F1 score on the test set are below: - location: 0.7862266857962696 - organization: 0.7770320656226697 - person: 0.9170638703527169 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.8124223893760291, 0.8416139230675236] - 95%: [0.8098712905029445, 0.8440240645643514] - F1 (macro): - 90%: [0.8124223893760291, 0.8416139230675236] - 95%: [0.8098712905029445, 0.8440240645643514] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-v3-large-ttc") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/ttc'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-v3-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
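The evaluation files linked in this card are plain JSON hosted in the model repository, so they can be pulled down programmatically and compared across checkpoints. A small sketch (no particular schema is assumed; the report is simply previewed):

```python
import json

import requests

# Full evaluation results published alongside the checkpoint.
url = "https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/eval/metric.json"
metric = requests.get(url, timeout=30).json()

# Preview the beginning of the pretty-printed report.
print(json.dumps(metric, indent=2)[:1000])
```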
tner/deberta-v3-large-tweebank-ner
tner
2022-09-26T14:39:17Z
13
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "dataset:tner/tweebank_ner", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-10T10:07:10Z
--- datasets: - tner/tweebank_ner metrics: - f1 - precision - recall model-index: - name: tner/deberta-v3-large-tweebank-ner results: - task: name: Token Classification type: token-classification dataset: name: tner/tweebank_ner type: tner/tweebank_ner args: tner/tweebank_ner metrics: - name: F1 type: f1 value: 0.7253474520185308 - name: Precision type: precision value: 0.7201051248357424 - name: Recall type: recall value: 0.7306666666666667 - name: F1 (macro) type: f1_macro value: 0.701874697798745 - name: Precision (macro) type: precision_macro value: 0.7043005470796733 - name: Recall (macro) type: recall_macro value: 0.706915721861374 - name: F1 (entity span) type: f1_entity_span value: 0.8178343949044585 - name: Precision (entity span) type: precision_entity_span value: 0.7829268292682927 - name: Recall (entity span) type: recall_entity_span value: 0.856 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-v3-large-tweebank-ner This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tner/tweebank_ner](https://huggingface.co/datasets/tner/tweebank_ner) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.7253474520185308 - Precision (micro): 0.7201051248357424 - Recall (micro): 0.7306666666666667 - F1 (macro): 0.701874697798745 - Precision (macro): 0.7043005470796733 - Recall (macro): 0.706915721861374 The per-entity breakdown of the F1 score on the test set are below: - location: 0.7289719626168224 - organization: 0.7040816326530612 - other: 0.5182926829268293 - person: 0.856152512998267 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.6978100031831928, 0.7529703029130037] - 95%: [0.691700704571692, 0.7582901338971108] - F1 (macro): - 90%: [0.6978100031831928, 0.7529703029130037] - 95%: [0.691700704571692, 0.7582901338971108] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-tweebank-ner/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-tweebank-ner/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-v3-large-tweebank-ner") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweebank_ner'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-v3-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-tweebank-ner/raw/main/trainer_config.json). 
### Reference

If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/deberta-v3-large-mit-movie-trivia
tner
2022-09-26T14:30:39Z
7
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "dataset:tner/mit_movie_trivia", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T11:41:52Z
--- datasets: - tner/mit_movie_trivia metrics: - f1 - precision - recall model-index: - name: tner/deberta-v3-large-mit-movie-trivia results: - task: name: Token Classification type: token-classification dataset: name: tner/mit_movie_trivia type: tner/mit_movie_trivia args: tner/mit_movie_trivia metrics: - name: F1 type: f1 value: 0.7324478178368122 - name: Precision type: precision value: 0.7186865267433988 - name: Recall type: recall value: 0.746746394653535 - name: F1 (macro) type: f1_macro value: 0.6597589403836301 - name: Precision (macro) type: precision_macro value: 0.6493939604029393 - name: Recall (macro) type: recall_macro value: 0.6747458149186768 - name: F1 (entity span) type: f1_entity_span value: 0.749525289142068 - name: Precision (entity span) type: precision_entity_span value: 0.7359322033898306 - name: Recall (entity span) type: recall_entity_span value: 0.7636299683432993 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-v3-large-mit-movie-trivia This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tner/mit_movie_trivia](https://huggingface.co/datasets/tner/mit_movie_trivia) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.7324478178368122 - Precision (micro): 0.7186865267433988 - Recall (micro): 0.746746394653535 - F1 (macro): 0.6597589403836301 - Precision (macro): 0.6493939604029393 - Recall (macro): 0.6747458149186768 The per-entity breakdown of the F1 score on the test set are below: - actor: 0.9590417310664605 - award: 0.4755244755244755 - character_name: 0.7391304347826086 - date: 0.9640179910044978 - director: 0.909706546275395 - genre: 0.755114693118413 - opinion: 0.4910714285714286 - origin: 0.3922518159806296 - plot: 0.4929757343550447 - quote: 0.7391304347826088 - relationship: 0.5705705705705706 - soundtrack: 0.42857142857142855 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.7213456287685677, 0.742502895519075] - 95%: [0.7198169787204788, 0.7460320515170399] - F1 (macro): - 90%: [0.7213456287685677, 0.742502895519075] - 95%: [0.7198169787204788, 0.7460320515170399] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-mit-movie-trivia/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-mit-movie-trivia/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-v3-large-mit-movie-trivia") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. 
### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/mit_movie_trivia'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-v3-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-mit-movie-trivia/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
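The confidence intervals quoted in these cards are obtained by bootstrap over the test set. The exact T-NER procedure is not spelled out here, but the general idea can be sketched as follows: resample test sentences with replacement, recompute micro-F1 on each resample, and take empirical percentiles. The per-sentence `(tp, fp, fn)` entity counts are assumed inputs.

```python
import random

def bootstrap_f1_ci(per_sentence_counts, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for micro-F1.

    per_sentence_counts: list of (tp, fp, fn) entity counts, one tuple per test sentence.
    """
    rng = random.Random(seed)
    n = len(per_sentence_counts)
    scores = []
    for _ in range(n_boot):
        # Resample sentences with replacement and pool their counts.
        sample = [per_sentence_counts[rng.randrange(n)] for _ in range(n)]
        tp = sum(c[0] for c in sample)
        fp = sum(c[1] for c in sample)
        fn = sum(c[2] for c in sample)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    scores.sort()
    lower = scores[int((alpha / 2) * n_boot)]
    upper = scores[min(int((1 - alpha / 2) * n_boot), n_boot - 1)]
    return lower, upper

# Example with toy counts for three sentences.
print(bootstrap_f1_ci([(3, 1, 0), (2, 0, 1), (0, 2, 2)]))
```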
tner/deberta-large-wnut2017
tner
2022-09-26T14:29:19Z
5
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "dataset:tner/wnut2017", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-09T23:25:51Z
--- datasets: - tner/wnut2017 metrics: - f1 - precision - recall model-index: - name: tner/deberta-large-wnut2017 results: - task: name: Token Classification type: token-classification dataset: name: tner/wnut2017 type: tner/wnut2017 args: tner/wnut2017 metrics: - name: F1 type: f1 value: 0.5105386416861827 - name: Precision type: precision value: 0.6931637519872814 - name: Recall type: recall value: 0.4040778498609824 - name: F1 (macro) type: f1_macro value: 0.4263428845085451 - name: Precision (macro) type: precision_macro value: 0.6003185137596864 - name: Recall (macro) type: recall_macro value: 0.35195768262641947 - name: F1 (entity span) type: f1_entity_span value: 0.5936768149882904 - name: Precision (entity span) type: precision_entity_span value: 0.8060413354531002 - name: Recall (entity span) type: recall_entity_span value: 0.46987951807228917 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-large-wnut2017 This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the [tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.5105386416861827 - Precision (micro): 0.6931637519872814 - Recall (micro): 0.4040778498609824 - F1 (macro): 0.4263428845085451 - Precision (macro): 0.6003185137596864 - Recall (macro): 0.35195768262641947 The per-entity breakdown of the F1 score on the test set are below: - corporation: 0.3503649635036496 - group: 0.3148148148148148 - location: 0.6029411764705882 - person: 0.6628895184135977 - product: 0.1951219512195122 - work_of_art: 0.431924882629108 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.47970650356554456, 0.5385161869734422] - 95%: [0.47475901512925966, 0.5430870496346687] - F1 (macro): - 90%: [0.47970650356554456, 0.5385161869734422] - 95%: [0.47475901512925966, 0.5430870496346687] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-large-wnut2017") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/wnut2017'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/trainer_config.json). 
### Reference

If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/deberta-v3-large-fin
tner
2022-09-26T14:28:32Z
8
2
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "dataset:tner/fin", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T22:13:20Z
--- datasets: - tner/fin metrics: - f1 - precision - recall model-index: - name: tner/deberta-v3-large-fin results: - task: name: Token Classification type: token-classification dataset: name: tner/fin type: tner/fin args: tner/fin metrics: - name: F1 type: f1 value: 0.7060755336617406 - name: Precision type: precision value: 0.738831615120275 - name: Recall type: recall value: 0.6761006289308176 - name: F1 (macro) type: f1_macro value: 0.45092058848834204 - name: Precision (macro) type: precision_macro value: 0.45426465258085835 - name: Recall (macro) type: recall_macro value: 0.45582773707773705 - name: F1 (entity span) type: f1_entity_span value: 0.7293729372937293 - name: Precision (entity span) type: precision_entity_span value: 0.7594501718213058 - name: Recall (entity span) type: recall_entity_span value: 0.7015873015873015 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-v3-large-fin This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tner/fin](https://huggingface.co/datasets/tner/fin) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.7060755336617406 - Precision (micro): 0.738831615120275 - Recall (micro): 0.6761006289308176 - F1 (macro): 0.45092058848834204 - Precision (macro): 0.45426465258085835 - Recall (macro): 0.45582773707773705 The per-entity breakdown of the F1 score on the test set are below: - location: 0.4000000000000001 - organization: 0.5762711864406779 - other: 0.0 - person: 0.8274111675126904 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.6370316240330781, 0.7718233002182738] - 95%: [0.6236274300363168, 0.7857205513784461] - F1 (macro): - 90%: [0.6370316240330781, 0.7718233002182738] - 95%: [0.6236274300363168, 0.7857205513784461] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-fin/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-fin/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-v3-large-fin") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/fin'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-v3-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-fin/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/roberta-large-ttc
tner
2022-09-26T14:25:57Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/ttc", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T10:49:56Z
--- datasets: - tner/ttc metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-ttc results: - task: name: Token Classification type: token-classification dataset: name: tner/ttc type: tner/ttc args: tner/ttc metrics: - name: F1 type: f1 value: 0.8314534321624235 - name: Precision type: precision value: 0.8269230769230769 - name: Recall type: recall value: 0.8360337005832793 - name: F1 (macro) type: f1_macro value: 0.8317396497007042 - name: Precision (macro) type: precision_macro value: 0.8296690551538254 - name: Recall (macro) type: recall_macro value: 0.8340850231639706 - name: F1 (entity span) type: f1_entity_span value: 0.8739929100870126 - name: Precision (entity span) type: precision_entity_span value: 0.8692307692307693 - name: Recall (entity span) type: recall_entity_span value: 0.8788075178224238 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-ttc This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/ttc](https://huggingface.co/datasets/tner/ttc) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.8314534321624235 - Precision (micro): 0.8269230769230769 - Recall (micro): 0.8360337005832793 - F1 (macro): 0.8317396497007042 - Precision (macro): 0.8296690551538254 - Recall (macro): 0.8340850231639706 The per-entity breakdown of the F1 score on the test set are below: - location: 0.7817403708987161 - organization: 0.7737656595431097 - person: 0.939712918660287 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.8153670265512099, 0.8476331336073506] - 95%: [0.8126974643551524, 0.8505459585794019] - F1 (macro): - 90%: [0.8153670265512099, 0.8476331336073506] - 95%: [0.8126974643551524, 0.8505459585794019] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-ttc/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-ttc/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-ttc") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/ttc'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 16 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 2 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-ttc/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/roberta-large-mit-restaurant
tner
2022-09-26T14:24:20Z
181
3
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/mit_restaurant", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T00:20:40Z
--- datasets: - tner/mit_restaurant metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-mit-restaurant results: - task: name: Token Classification type: token-classification dataset: name: tner/mit_restaurant type: tner/mit_restaurant args: tner/mit_restaurant metrics: - name: F1 type: f1 value: 0.8164676304211189 - name: Precision type: precision value: 0.8085901027077498 - name: Recall type: recall value: 0.8245001586797842 - name: F1 (macro) type: f1_macro value: 0.8081522050756316 - name: Precision (macro) type: precision_macro value: 0.7974927131040113 - name: Recall (macro) type: recall_macro value: 0.8199029986502094 - name: F1 (entity span) type: f1_entity_span value: 0.8557510999371464 - name: Precision (entity span) type: precision_entity_span value: 0.8474945533769063 - name: Recall (entity span) type: recall_entity_span value: 0.8641701047286575 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-mit-restaurant This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/mit_restaurant](https://huggingface.co/datasets/tner/mit_restaurant) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.8164676304211189 - Precision (micro): 0.8085901027077498 - Recall (micro): 0.8245001586797842 - F1 (macro): 0.8081522050756316 - Precision (macro): 0.7974927131040113 - Recall (macro): 0.8199029986502094 The per-entity breakdown of the F1 score on the test set are below: - amenity: 0.7140221402214022 - cuisine: 0.8558052434456929 - dish: 0.829103214890017 - location: 0.8611793611793611 - money: 0.8579710144927537 - rating: 0.8 - restaurant: 0.8713375796178344 - time: 0.6757990867579908 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.8050039870241192, 0.8289531287254172] - 95%: [0.8030897272187587, 0.8312785732455824] - F1 (macro): - 90%: [0.8050039870241192, 0.8289531287254172] - 95%: [0.8030897272187587, 0.8312785732455824] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-mit-restaurant/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-mit-restaurant/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-mit-restaurant") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/mit_restaurant'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-mit-restaurant/raw/main/trainer_config.json). 
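The stock usage snippet above uses a generic sentence; queries in the style of the MIT Restaurant corpus are more representative inputs for this model. A small sketch using the same `tner` API with made-up queries:

```python
from tner import TransformersNER

model = TransformersNER("tner/roberta-large-mit-restaurant")

# Batch prediction over restaurant-style search queries (example queries are invented).
queries = [
    "any cheap italian places open after 10 pm near downtown",
    "book a table for two at a highly rated sushi restaurant tomorrow night",
]
print(model.predict(queries))
```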
### Reference

If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/roberta-large-bionlp2004
tner
2022-09-26T14:23:31Z
10
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/bionlp2004", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T00:10:08Z
--- datasets: - tner/bionlp2004 metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-bionlp2004 results: - task: name: Token Classification type: token-classification dataset: name: tner/bionlp2004 type: tner/bionlp2004 args: tner/bionlp2004 metrics: - name: F1 type: f1 value: 0.7513434294088912 - name: Precision type: precision value: 0.7090462042823481 - name: Recall type: recall value: 0.7990071577003002 - name: F1 (macro) type: f1_macro value: 0.7165656135319811 - name: Precision (macro) type: precision_macro value: 0.6765580411075789 - name: Recall (macro) type: recall_macro value: 0.7685019796698731 - name: F1 (entity span) type: f1_entity_span value: 0.7936818107800032 - name: Precision (entity span) type: precision_entity_span value: 0.7490011269337158 - name: Recall (entity span) type: recall_entity_span value: 0.8440314015238974 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-bionlp2004 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/bionlp2004](https://huggingface.co/datasets/tner/bionlp2004) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.7513434294088912 - Precision (micro): 0.7090462042823481 - Recall (micro): 0.7990071577003002 - F1 (macro): 0.7165656135319811 - Precision (macro): 0.6765580411075789 - Recall (macro): 0.7685019796698731 The per-entity breakdown of the F1 score on the test set are below: - cell_line: 0.6080273270708796 - cell_type: 0.7536311318169361 - dna: 0.7150259067357512 - protein: 0.7738602374694099 - rna: 0.7322834645669293 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.7433198691879565, 0.7598437899577305] - 95%: [0.7420570442205622, 0.7606216680394585] - F1 (macro): - 90%: [0.7433198691879565, 0.7598437899577305] - 95%: [0.7420570442205622, 0.7606216680394585] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-bionlp2004/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-bionlp2004/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-bionlp2004") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/bionlp2004'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-bionlp2004/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
tner/roberta-large-fin
tner
2022-09-26T14:22:04Z
11
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:fin", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T20:28:39Z
--- datasets: - fin metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-fin results: - task: name: Token Classification type: token-classification dataset: name: fin type: fin args: fin metrics: - name: F1 type: f1 value: 0.6988727858293075 - name: Precision type: precision value: 0.7161716171617162 - name: Recall type: recall value: 0.6823899371069182 - name: F1 (macro) type: f1_macro value: 0.45636958249281745 - name: Precision (macro) type: precision_macro value: 0.4519134760270864 - name: Recall (macro) type: recall_macro value: 0.4705942205942206 - name: F1 (entity span) type: f1_entity_span value: 0.7087378640776698 - name: Precision (entity span) type: precision_entity_span value: 0.7227722772277227 - name: Recall (entity span) type: recall_entity_span value: 0.6952380952380952 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-fin This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/fin](https://huggingface.co/datasets/tner/fin) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.6988727858293075 - Precision (micro): 0.7161716171617162 - Recall (micro): 0.6823899371069182 - F1 (macro): 0.45636958249281745 - Precision (macro): 0.4519134760270864 - Recall (macro): 0.4705942205942206 The per-entity breakdown of the F1 score on the test set are below: - location: 0.5121951219512196 - organization: 0.49624060150375937 - other: 0.0 - person: 0.8170426065162907 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.6355508274231678, 0.7613829748047737] - 95%: [0.624150263185174, 0.7724430709173716] - F1 (macro): - 90%: [0.6355508274231678, 0.7613829748047737] - 95%: [0.624150263185174, 0.7724430709173716] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-fin/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-fin/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-fin") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/fin'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-fin/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
tner/roberta-large-tweebank-ner
tner
2022-09-26T14:21:19Z
11
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/tweebank_ner", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-10T10:03:35Z
--- datasets: - tner/tweebank_ner metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-tweebank-ner results: - task: name: Token Classification type: token-classification dataset: name: tner/tweebank_ner type: tner/tweebank_ner args: tner/tweebank_ner metrics: - name: F1 type: f1 value: 0.7439490445859872 - name: Precision type: precision value: 0.7121951219512195 - name: Recall type: recall value: 0.7786666666666666 - name: F1 (macro) type: f1_macro value: 0.7354319457314183 - name: Precision (macro) type: precision_macro value: 0.712928566565599 - name: Recall (macro) type: recall_macro value: 0.7620465365030582 - name: F1 (entity span) type: f1_entity_span value: 0.8178343949044585 - name: Precision (entity span) type: precision_entity_span value: 0.7829268292682927 - name: Recall (entity span) type: recall_entity_span value: 0.856 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-tweebank-ner This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/tweebank_ner](https://huggingface.co/datasets/tner/tweebank_ner) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.7439490445859872 - Precision (micro): 0.7121951219512195 - Recall (micro): 0.7786666666666666 - F1 (macro): 0.7354319457314183 - Precision (macro): 0.712928566565599 - Recall (macro): 0.7620465365030582 The per-entity breakdown of the F1 score on the test set are below: - location: 0.7782805429864253 - organization: 0.7377049180327869 - other: 0.5520581113801453 - person: 0.8736842105263157 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.7156413818791614, 0.771698046498159] - 95%: [0.7063867669973017, 0.7763088810979543] - F1 (macro): - 90%: [0.7156413818791614, 0.771698046498159] - 95%: [0.7063867669973017, 0.7763088810979543] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-tweebank-ner/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-tweebank-ner/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-tweebank-ner") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/tweebank_ner'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-tweebank-ner/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
tner/roberta-large-mit-movie-trivia
tner
2022-09-26T14:15:35Z
17
1
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/mit_movie_trivia", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T10:37:29Z
--- datasets: - tner/mit_movie_trivia metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-mit-movie-trivia results: - task: name: Token Classification type: token-classification dataset: name: tner/mit_movie_trivia type: tner/mit_movie_trivia args: tner/mit_movie_trivia metrics: - name: F1 type: f1 value: 0.7284025200655909 - name: Precision type: precision value: 0.7151330283002881 - name: Recall type: recall value: 0.7421737601125572 - name: F1 (macro) type: f1_macro value: 0.6502255723148889 - name: Precision (macro) type: precision_macro value: 0.6457158565124362 - name: Recall (macro) type: recall_macro value: 0.6578012664661943 - name: F1 (entity span) type: f1_entity_span value: 0.749525289142068 - name: Precision (entity span) type: precision_entity_span value: 0.7359322033898306 - name: Recall (entity span) type: recall_entity_span value: 0.7636299683432993 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-mit-movie-trivia This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/mit_movie_trivia](https://huggingface.co/datasets/tner/mit_movie_trivia) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.7284025200655909 - Precision (micro): 0.7151330283002881 - Recall (micro): 0.7421737601125572 - F1 (macro): 0.6502255723148889 - Precision (macro): 0.6457158565124362 - Recall (macro): 0.6578012664661943 The per-entity breakdown of the F1 score on the test set are below: - actor: 0.9557453416149068 - award: 0.41726618705035967 - character_name: 0.7467105263157895 - date: 0.9668674698795181 - director: 0.9148936170212766 - genre: 0.7277079593058049 - opinion: 0.43478260869565216 - origin: 0.28846153846153844 - plot: 0.5132575757575758 - quote: 0.8387096774193549 - relationship: 0.5697329376854599 - soundtrack: 0.42857142857142855 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.718570586211627, 0.7387631655667131] - 95%: [0.7170135350354089, 0.7412372838115527] - F1 (macro): - 90%: [0.718570586211627, 0.7387631655667131] - 95%: [0.7170135350354089, 0.7412372838115527] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-mit-movie-trivia/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-mit-movie-trivia/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-mit-movie-trivia") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. 
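For reference, the not-recommended transformers route mentioned above looks roughly like the sketch below. The CRF transition layer is bypassed, so decoding quality may drop slightly; `aggregation_strategy` is a standard pipeline option rather than anything specific to this model.

```python
from transformers import pipeline

# Plain transformers token-classification pipeline (no CRF decoding).
ner = pipeline(
    "token-classification",
    model="tner/roberta-large-mit-movie-trivia",
    aggregation_strategy="simple",
)
print(ner("what 1975 Steven Spielberg movie features a giant shark terrorizing a beach town"))
```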
### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/mit_movie_trivia'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-mit-movie-trivia/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
tner/deberta-v3-large-ontonotes5
tner
2022-09-26T14:14:54Z
336
2
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "dataset:tner/ontonotes5", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-19T18:22:34Z
--- datasets: - tner/ontonotes5 metrics: - f1 - precision - recall model-index: - name: tner/deberta-v3-large-ontonotes5 results: - task: name: Token Classification type: token-classification dataset: name: tner/ontonotes5 type: tner/ontonotes5 args: tner/ontonotes5 metrics: - name: F1 type: f1 value: 0.9069623608411381 - name: Precision type: precision value: 0.902100360312857 - name: Recall type: recall value: 0.9118770542773386 - name: F1 (macro) type: f1_macro value: 0.834586960779896 - name: Precision (macro) type: precision_macro value: 0.8237351069457466 - name: Recall (macro) type: recall_macro value: 0.8475169311172334 - name: F1 (entity span) type: f1_entity_span value: 0.9267538434352359 - name: Precision (entity span) type: precision_entity_span value: 0.9217857456718517 - name: Recall (entity span) type: recall_entity_span value: 0.9317757839566492 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/deberta-v3-large-ontonotes5 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.9069623608411381 - Precision (micro): 0.902100360312857 - Recall (micro): 0.9118770542773386 - F1 (macro): 0.834586960779896 - Precision (macro): 0.8237351069457466 - Recall (macro): 0.8475169311172334 The per-entity breakdown of the F1 score on the test set are below: - cardinal_number: 0.853475935828877 - date: 0.8815545959284392 - event: 0.8030303030303031 - facility: 0.7896678966789669 - geopolitical_area: 0.9650033867690223 - group: 0.9337209302325581 - language: 0.8372093023255814 - law: 0.6756756756756757 - location: 0.7624020887728459 - money: 0.8818897637795275 - ordinal_number: 0.8635235732009926 - organization: 0.914952751528627 - percent: 0.9 - person: 0.9609866599546942 - product: 0.7901234567901234 - quantity: 0.8161434977578474 - time: 0.674364896073903 - work_of_art: 0.7188405797101449 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.9019409960743083, 0.911751130722053] - 95%: [0.9010822890967028, 0.9125611412371442] - F1 (macro): - 90%: [0.9019409960743083, 0.911751130722053] - 95%: [0.9010822890967028, 0.9125611412371442] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-ontonotes5/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-ontonotes5/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/deberta-v3-large-ontonotes5") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. 
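If you only need to check which entity labels the classification head was trained with, downloading the configuration is enough. A small sketch (the exact label strings depend on how the dataset encodes them, so the output format below is an assumption):

```python
from transformers import AutoConfig

# config.json carries the id2label mapping of the fine-tuned head.
config = AutoConfig.from_pretrained("tner/deberta-v3-large-ontonotes5")
print(sorted({label.split("-")[-1] for label in config.id2label.values()}))
```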
### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/ontonotes5'] - dataset_split: train - dataset_name: None - local_dataset: None - model: microsoft/deberta-v3-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 16 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 4 - weight_decay: 1e-07 - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-ontonotes5/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
tner/roberta-large-bc5cdr
tner
2022-09-26T14:13:58Z
12
2
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/bc5cdr", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-09T23:32:35Z
--- datasets: - tner/bc5cdr metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-bc5cdr results: - task: name: Token Classification type: token-classification dataset: name: tner/bc5cdr type: tner/bc5cdr args: tner/bc5cdr metrics: - name: F1 type: f1 value: 0.8840696387239609 - name: Precision type: precision value: 0.8728266269249876 - name: Recall type: recall value: 0.8956060760526048 - name: F1 (macro) type: f1_macro value: 0.8797360472482783 - name: Precision (macro) type: precision_macro value: 0.8684274142690976 - name: Recall (macro) type: recall_macro value: 0.8913672531528037 - name: F1 (entity span) type: f1_entity_span value: 0.886283586595552 - name: Precision (entity span) type: precision_entity_span value: 0.8750124192747144 - name: Recall (entity span) type: recall_entity_span value: 0.8978489142624121 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-bc5cdr This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/bc5cdr](https://huggingface.co/datasets/tner/bc5cdr) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.8840696387239609 - Precision (micro): 0.8728266269249876 - Recall (micro): 0.8956060760526048 - F1 (macro): 0.8797360472482783 - Precision (macro): 0.8684274142690976 - Recall (macro): 0.8913672531528037 The per-entity breakdown of the F1 score on the test set are below: - chemical: 0.9256943167187788 - disease: 0.8337777777777777 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.878869501707946, 0.8890795634554179] - 95%: [0.8776790106527211, 0.8897422640465147] - F1 (macro): - 90%: [0.878869501707946, 0.8890795634554179] - 95%: [0.8776790106527211, 0.8897422640465147] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-bc5cdr/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-bc5cdr/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-bc5cdr") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/bc5cdr'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-bc5cdr/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
tner/roberta-large-conll2003
tner
2022-09-26T14:13:18Z
65
1
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/conll2003", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-09T23:19:06Z
--- datasets: - tner/conll2003 metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-conll2003 results: - task: name: Token Classification type: token-classification dataset: name: tner/conll2003 type: tner/conll2003 args: tner/conll2003 metrics: - name: F1 type: f1 value: 0.924769027716674 - name: Precision type: precision value: 0.9191883855168795 - name: Recall type: recall value: 0.9304178470254958 - name: F1 (macro) type: f1_macro value: 0.9110950780089749 - name: Precision (macro) type: precision_macro value: 0.9030546238754271 - name: Recall (macro) type: recall_macro value: 0.9197126371122274 - name: F1 (entity span) type: f1_entity_span value: 0.9619852164730729 - name: Precision (entity span) type: precision_entity_span value: 0.9562631210636809 - name: Recall (entity span) type: recall_entity_span value: 0.9677762039660056 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-conll2003 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/conll2003](https://huggingface.co/datasets/tner/conll2003) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.924769027716674 - Precision (micro): 0.9191883855168795 - Recall (micro): 0.9304178470254958 - F1 (macro): 0.9110950780089749 - Precision (macro): 0.9030546238754271 - Recall (macro): 0.9197126371122274 The per-entity breakdown of the F1 score on the test set are below: - location: 0.9390573401380967 - organization: 0.9107142857142857 - other: 0.8247422680412372 - person: 0.9698664181422801 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.9185189408755685, 0.9309806929048586] - 95%: [0.9174010190551032, 0.9318590917100465] - F1 (macro): - 90%: [0.9185189408755685, 0.9309806929048586] - 95%: [0.9174010190551032, 0.9318590917100465] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-conll2003/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-conll2003/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-conll2003") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. ### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/conll2003'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 17 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-conll2003/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). 
``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
tner/roberta-large-ontonotes5
tner
2022-09-26T14:12:05Z
30,400
16
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/ontonotes5", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T10:33:41Z
--- datasets: - tner/ontonotes5 metrics: - f1 - precision - recall model-index: - name: tner/roberta-large-ontonotes5 results: - task: name: Token Classification type: token-classification dataset: name: tner/ontonotes5 type: tner/ontonotes5 args: tner/ontonotes5 metrics: - name: F1 type: f1 value: 0.908632361399938 - name: Precision type: precision value: 0.905148095909732 - name: Recall type: recall value: 0.9121435551212579 - name: F1 (macro) type: f1_macro value: 0.8265477704565624 - name: Precision (macro) type: precision_macro value: 0.8170668848546687 - name: Recall (macro) type: recall_macro value: 0.8387672780349001 - name: F1 (entity span) type: f1_entity_span value: 0.9284544931640193 - name: Precision (entity span) type: precision_entity_span value: 0.9248942172073342 - name: Recall (entity span) type: recall_entity_span value: 0.9320422848005685 pipeline_tag: token-classification widget: - text: "Jacob Collier is a Grammy awarded artist from England." example_title: "NER Example 1" --- # tner/roberta-large-ontonotes5 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set: - F1 (micro): 0.908632361399938 - Precision (micro): 0.905148095909732 - Recall (micro): 0.9121435551212579 - F1 (macro): 0.8265477704565624 - Precision (macro): 0.8170668848546687 - Recall (macro): 0.8387672780349001 The per-entity breakdown of the F1 score on the test set are below: - cardinal_number: 0.8605277329025309 - date: 0.872996300863132 - event: 0.7424242424242424 - facility: 0.7732342007434945 - geopolitical_area: 0.9687148323205043 - group: 0.9470588235294117 - language: 0.7499999999999999 - law: 0.6666666666666666 - location: 0.7593582887700535 - money: 0.901098901098901 - ordinal_number: 0.85785536159601 - organization: 0.9227360841872057 - percent: 0.9171428571428571 - person: 0.9556004036326943 - product: 0.7857142857142858 - quantity: 0.7945205479452055 - time: 0.6870588235294116 - work_of_art: 0.7151515151515151 For F1 scores, the confidence interval is obtained by bootstrap as below: - F1 (micro): - 90%: [0.9039454247544766, 0.9128956119702822] - 95%: [0.9030263216115454, 0.9138350859566045] - F1 (macro): - 90%: [0.9039454247544766, 0.9128956119702822] - 95%: [0.9030263216115454, 0.9138350859566045] Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-ontonotes5/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-ontonotes5/raw/main/eval/metric_span.json). ### Usage This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip ```shell pip install tner ``` and activate model as below. ```python from tner import TransformersNER model = TransformersNER("tner/roberta-large-ontonotes5") model.predict(["Jacob Collier is a Grammy awarded English artist from London"]) ``` It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment. 
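`predict` accepts a list, so a batch of sentences can be tagged in a single call; entity types follow the 18 OntoNotes 5 categories reported above. A minimal sketch with illustrative inputs:

```python
from tner import TransformersNER

model = TransformersNER("tner/roberta-large-ontonotes5")
print(model.predict([
    "Jacob Collier is a Grammy awarded artist from England.",
    "The deal was worth $2 billion and closed in March 2020.",
]))
```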
### Training hyperparameters The following hyperparameters were used during training: - dataset: ['tner/ontonotes5'] - dataset_split: train - dataset_name: None - local_dataset: None - model: roberta-large - crf: True - max_length: 128 - epoch: 15 - batch_size: 64 - lr: 1e-05 - random_seed: 42 - gradient_accumulation_steps: 1 - weight_decay: None - lr_warmup_step_ratio: 0.1 - max_grad_norm: 10.0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-ontonotes5/raw/main/trainer_config.json). ### Reference If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.eacl-demos.7", doi = "10.18653/v1/2021.eacl-demos.7", pages = "53--62", abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.", } ```
dartkain/newforproject
dartkain
2022-09-26T14:00:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-09-26T14:00:12Z
--- license: creativeml-openrail-m ---
kp9z2/distilgpt2-finetuned-wikitext2
kp9z2
2022-09-26T13:51:59Z
3
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-09-26T13:40:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: kp9z2/distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kp9z2/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8579 - Validation Loss: 3.6750 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8579 | 3.6750 | 0 | ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Datasets 2.5.1 - Tokenizers 0.12.1
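The card does not include an inference snippet. Since the repository tags indicate TensorFlow weights, a minimal sketch might look like the following (the prompt and sampling settings are arbitrary choices, not part of the original training setup):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kp9z2/distilgpt2-finetuned-wikitext2")
model = TFAutoModelForCausalLM.from_pretrained("kp9z2/distilgpt2-finetuned-wikitext2")

inputs = tokenizer("The history of the encyclopedia", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```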
sd-concepts-library/ba-shiroko
sd-concepts-library
2022-09-26T13:20:08Z
0
5
null
[ "license:mit", "region:us" ]
null
2022-09-26T13:20:04Z
--- license: mit --- ### BA_Shiroko on Stable Diffusion This is the `<shiroko>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<shiroko> 0](https://huggingface.co/sd-concepts-library/ba-shiroko/resolve/main/concept_images/3.jpeg) ![<shiroko> 1](https://huggingface.co/sd-concepts-library/ba-shiroko/resolve/main/concept_images/1.jpeg) ![<shiroko> 2](https://huggingface.co/sd-concepts-library/ba-shiroko/resolve/main/concept_images/4.jpeg) ![<shiroko> 3](https://huggingface.co/sd-concepts-library/ba-shiroko/resolve/main/concept_images/0.jpeg) ![<shiroko> 4](https://huggingface.co/sd-concepts-library/ba-shiroko/resolve/main/concept_images/2.jpeg)
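Besides the notebooks linked above, the learned embedding can be loaded directly with diffusers. A minimal sketch, assuming a recent diffusers release (which provides `load_textual_inversion`) and a Stable Diffusion v1.5 base checkpoint; the prompt is just an example:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Pull the learned <shiroko> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/ba-shiroko")

image = pipe("a portrait of <shiroko> walking through a city at night").images[0]
image.save("shiroko.png")
```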
jurabi/bert-ner-japanese
jurabi
2022-09-26T12:13:44Z
3,771
10
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ja", "license:cc-by-sa-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-26T07:46:38Z
---
language:
- ja
widget:
- text: 株式会社Jurabiは、東京都台東区に本社を置くIT企業である。
license: cc-by-sa-3.0
---

# BERT model for Japanese named entity recognition

This model extracts named entities from Japanese sentences using [BertForTokenClassification](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForTokenClassification).

It extracts the following eight types of named entities:

- Person names
- Corporation names (corporations and corporation-like organizations)
- Political organization names (political organizations, political parties, government organizations, administrative organizations, military forces, international organizations)
- Other organization names (sports organizations, performance organizations, others)
- Place names
- Facility names
- Product names (goods, TV programs, films, books, songs, brands, etc.)
- Event names

## Usage

Install the required libraries (transformers, unidic_lite, fugashi) with pip or a similar tool, then simply run the code below.

```python
from transformers import BertJapaneseTokenizer, BertForTokenClassification
from transformers import pipeline

model = BertForTokenClassification.from_pretrained("jurabi/bert-ner-japanese")
tokenizer = BertJapaneseTokenizer.from_pretrained("jurabi/bert-ner-japanese")

ner_pipeline = pipeline('ner', model=model, tokenizer=tokenizer)
ner_pipeline("株式会社Jurabiは、東京都台東区に本社を置くIT企業である。")
```

## Pre-trained model

The Japanese BERT model released by the Inui Laboratory at Tohoku University ([cl-tohoku/bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2))

## Training data

The Japanese named entity recognition dataset built from Wikipedia and released by Stockmark Inc. ([stockmarkteam/ner-wikipedia-dataset](https://github.com/stockmarkteam/ner-wikipedia-dataset))

## Source code

The program used for fine-tuning is available at [jurabiinc/bert-ner-japanese](https://github.com/jurabiinc/bert-ner-japanese).

## License

[Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
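To merge the per-token predictions from the snippet in the Usage section into whole entities, the pipeline's standard aggregation option can be used. This is a generic transformers feature rather than something documented for this model, so treat it as a sketch:

```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into complete entity spans.
ner = pipeline(
    "ner",
    model="jurabi/bert-ner-japanese",
    tokenizer="jurabi/bert-ner-japanese",
    aggregation_strategy="simple",
)
print(ner("株式会社Jurabiは、東京都台東区に本社を置くIT企業である。"))
```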
sd-concepts-library/poring-ragnarok-online
sd-concepts-library
2022-09-26T12:11:17Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-26T12:11:11Z
--- license: mit --- ### Poring Ragnarok Online on Stable Diffusion This is the `<poring-ro>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<poring-ro> 0](https://huggingface.co/sd-concepts-library/poring-ragnarok-online/resolve/main/concept_images/3.jpeg) ![<poring-ro> 1](https://huggingface.co/sd-concepts-library/poring-ragnarok-online/resolve/main/concept_images/1.jpeg) ![<poring-ro> 2](https://huggingface.co/sd-concepts-library/poring-ragnarok-online/resolve/main/concept_images/4.jpeg) ![<poring-ro> 3](https://huggingface.co/sd-concepts-library/poring-ragnarok-online/resolve/main/concept_images/5.jpeg) ![<poring-ro> 4](https://huggingface.co/sd-concepts-library/poring-ragnarok-online/resolve/main/concept_images/0.jpeg) ![<poring-ro> 5](https://huggingface.co/sd-concepts-library/poring-ragnarok-online/resolve/main/concept_images/2.jpeg)
huggingtweets/dolceragazza26-femdomfusion-mistressleiaa
huggingtweets
2022-09-26T10:24:27Z
121
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-26T10:23:11Z
--- language: en thumbnail: http://www.huggingtweets.com/dolceragazza26-femdomfusion-mistressleiaa/1664187862433/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1196458544947769345/S04dF85Y_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1548917860742729728/Kl_FyA-Y_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1574160039232868352/If7OL-Q-_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Femdom Fusion & 💞 M.Francesca ⛔️No porn🚫 & Mistress Leia</div> <div style="text-align: center; font-size: 14px;">@dolceragazza26-femdomfusion-mistressleiaa</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Femdom Fusion & 💞 M.Francesca ⛔️No porn🚫 & Mistress Leia. | Data | Femdom Fusion | 💞 M.Francesca ⛔️No porn🚫 | Mistress Leia | | --- | --- | --- | --- | | Tweets downloaded | 3248 | 3222 | 3249 | | Retweets | 0 | 2204 | 663 | | Short tweets | 355 | 168 | 367 | | Tweets kept | 2893 | 850 | 2219 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ov6jnnk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dolceragazza26-femdomfusion-mistressleiaa's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15gbfzn2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15gbfzn2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dolceragazza26-femdomfusion-mistressleiaa') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
microsoft/deberta-v3-xsmall
microsoft
2022-09-26T08:59:28Z
54,424
42
transformers
[ "transformers", "pytorch", "tf", "deberta-v2", "deberta", "deberta-v3", "fill-mask", "en", "arxiv:2006.03654", "arxiv:2111.09543", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - deberta - deberta-v3 - fill-mask thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. With those two improvements, DeBERTa out perform RoBERTa on a majority of NLU tasks with 80GB training data. In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technique details about the new model from our [paper](https://arxiv.org/abs/2111.09543). Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates. The DeBERTa V3 xsmall model comes with 12 layers and a hidden size of 384. It has only **22M** backbone parameters with a vocabulary containing 128K tokens which introduces 48M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 2.0 and MNLI tasks. | Model |Vocabulary(K)|Backbone #Params(M)| SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC)| |-------------------|----------|-------------------|-----------|----------| | RoBERTa-base |50 |86 | 83.7/80.5 | 87.6/- | | XLNet-base |32 |92 | -/80.2 | 86.8/- | | ELECTRA-base |30 |86 | -/80.5 | 88.8/ | | DeBERTa-base |50 |100 | 86.2/83.1| 88.8/88.5| | DeBERTa-v3-large|128|304 | 91.5/89.0 | 91.8/91.9| | DeBERTa-v3-base |128|86 | 88.4/85.4 | 90.6/90.7| | DeBERTa-v3-small |128|44 | 82.8/80.4 | 88.3/87.7| | **DeBERTa-v3-xsmall** |128|**22** | **84.8/82.0** | **88.1/88.3**| | DeBERTa-v3-xsmall+SiFT|128|22 | -/- | 88.4/88.5| [#| ELECTRA-small |30 |9.6 | - | - |]:: #### Fine-tuning with HF transformers ```bash #!/bin/bash cd transformers/examples/pytorch/text-classification/ pip install datasets export TASK_NAME=mnli output_dir="ds_results" num_gpus=8 batch_size=8 python -m torch.distributed.launch --nproc_per_node=${num_gpus} \ run_glue.py \ --model_name_or_path microsoft/deberta-v3-xsmall \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --evaluation_strategy steps \ --max_seq_length 256 \ --warmup_steps 1000 \ --per_device_train_batch_size ${batch_size} \ --learning_rate 4.5e-5 \ --num_train_epochs 3 \ --output_dir $output_dir \ --overwrite_output_dir \ --logging_steps 1000 \ --logging_dir $output_dir ``` ### Citation If you find DeBERTa useful for your work, please cite the following papers: ``` latex @misc{he2021debertav3, title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing}, author={Pengcheng He and Jianfeng Gao and Weizhu Chen}, year={2021}, eprint={2111.09543}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
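Beyond the GLUE fine-tuning script above, the checkpoint can be loaded directly as an encoder backbone for feature extraction or further fine-tuning. A minimal sketch (the example sentence is arbitrary; the 384-dimensional output matches the hidden size quoted above):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")
model = AutoModel.from_pretrained("microsoft/deberta-v3-xsmall")

inputs = tokenizer("DeBERTaV3 xsmall has a hidden size of 384.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 384)
```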
microsoft/deberta-v3-small
microsoft
2022-09-26T08:59:13Z
338,854
49
transformers
[ "transformers", "pytorch", "tf", "deberta-v2", "deberta", "deberta-v3", "fill-mask", "en", "arxiv:2006.03654", "arxiv:2111.09543", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - deberta - deberta-v3 - fill-mask thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. With those two improvements, DeBERTa out perform RoBERTa on a majority of NLU tasks with 80GB training data. In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technique details about the new model from our [paper](https://arxiv.org/abs/2111.09543). Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates. The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. It has **44M** backbone parameters with a vocabulary containing 128K tokens which introduces 98M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 2.0 and MNLI tasks. | Model |Vocabulary(K)|Backbone #Params(M)| SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC)| |-------------------|----------|-------------------|-----------|----------| | RoBERTa-base |50 |86 | 83.7/80.5 | 87.6/- | | XLNet-base |32 |92 | -/80.2 | 86.8/- | | ELECTRA-base |30 |86 | -/80.5 | 88.8/ | | DeBERTa-base |50 |100 | 86.2/83.1| 88.8/88.5| | DeBERTa-v3-large|128|304 | 91.5/89.0 | 91.8/91.9 | | DeBERTa-v3-base |128|86 | 88.4/85.4 | 90.6/90.7| | **DeBERTa-v3-small** |128|**44** | **82.8/80.4** | **88.3/87.7**| | DeBERTa-v3-small+SiFT|128|22 | -/- | 88.8/88.5| #### Fine-tuning with HF transformers ```bash #!/bin/bash cd transformers/examples/pytorch/text-classification/ pip install datasets export TASK_NAME=mnli output_dir="ds_results" num_gpus=8 batch_size=8 python -m torch.distributed.launch --nproc_per_node=${num_gpus} \ run_glue.py \ --model_name_or_path microsoft/deberta-v3-small \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --evaluation_strategy steps \ --max_seq_length 256 \ --warmup_steps 1500 \ --per_device_train_batch_size ${batch_size} \ --learning_rate 4.5e-5 \ --num_train_epochs 3 \ --output_dir $output_dir \ --overwrite_output_dir \ --logging_steps 1000 \ --logging_dir $output_dir ``` ### Citation If you find DeBERTa useful for your work, please cite the following papers: ``` latex @misc{he2021debertav3, title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing}, author={Pengcheng He and Jianfeng Gao and Weizhu Chen}, year={2021}, eprint={2111.09543}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
microsoft/deberta-v2-xlarge
microsoft
2022-09-26T08:59:06Z
104,224
23
transformers
[ "transformers", "pytorch", "tf", "deberta-v2", "deberta", "fill-mask", "en", "arxiv:2006.03654", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - deberta - fill-mask thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on a majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa V2 xlarge model, with 24 layers and a hidden size of 1536. It has 900M total parameters and was trained with 160GB of raw data. ### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. 
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \ --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \ --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
prikarsartam/Olga
prikarsartam
2022-09-26T08:17:24Z
67
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-26T04:59:17Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: prikarsartam/Olga results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # prikarsartam/Olga This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8904 - Validation Loss: 2.6281 - Train Rouge1: 25.0368 - Train Rouge2: 5.6914 - Train Rougel: 19.4806 - Train Rougelsum: 19.4874 - Train Gen Len: 18.7987 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 3.0715 | 2.6854 | 23.4337 | 4.8994 | 18.1348 | 18.1316 | 18.7024 | 0 | | 2.8904 | 2.6281 | 25.0368 | 5.6914 | 19.4806 | 19.4874 | 18.7987 | 1 | ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Datasets 2.5.1 - Tokenizers 0.12.1
Immanuelraja/layoutlmv3-finetuned-cord_100
Immanuelraja
2022-09-26T07:33:00Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:cord-layoutlmv3", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-26T06:36:43Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - cord-layoutlmv3 metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-cord_100 results: - task: name: Token Classification type: token-classification dataset: name: cord-layoutlmv3 type: cord-layoutlmv3 config: cord split: train args: cord metrics: - name: Precision type: precision value: 0.9478778853313478 - name: Recall type: recall value: 0.9528443113772455 - name: F1 type: f1 value: 0.950354609929078 - name: Accuracy type: accuracy value: 0.9541595925297114 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-cord_100 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 0.2176 - Precision: 0.9479 - Recall: 0.9528 - F1: 0.9504 - Accuracy: 0.9542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 250 | 1.0378 | 0.7404 | 0.7964 | 0.7674 | 0.8035 | | 1.4104 | 3.12 | 500 | 0.5605 | 0.8291 | 0.8645 | 0.8465 | 0.8790 | | 1.4104 | 4.69 | 750 | 0.3959 | 0.8728 | 0.8990 | 0.8857 | 0.9155 | | 0.4054 | 6.25 | 1000 | 0.3111 | 0.9231 | 0.9349 | 0.9290 | 0.9393 | | 0.4054 | 7.81 | 1250 | 0.2847 | 0.9135 | 0.9251 | 0.9193 | 0.9317 | | 0.2124 | 9.38 | 1500 | 0.2457 | 0.9281 | 0.9379 | 0.9330 | 0.9410 | | 0.2124 | 10.94 | 1750 | 0.2390 | 0.9371 | 0.9484 | 0.9427 | 0.9520 | | 0.1438 | 12.5 | 2000 | 0.2196 | 0.9443 | 0.9513 | 0.9478 | 0.9546 | | 0.1438 | 14.06 | 2250 | 0.2182 | 0.9478 | 0.9521 | 0.9500 | 0.9533 | | 0.1093 | 15.62 | 2500 | 0.2176 | 0.9479 | 0.9528 | 0.9504 | 0.9542 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
neeva/query2query
neeva
2022-09-26T07:11:21Z
32
8
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-22T18:23:35Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity license: cc-by-nc-sa-4.0 --- # query2query This is a [sentence-transformers](https://www.SBERT.net) model: It maps queries to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search over queries. Check out the announcement blog post for more information: https://neeva.com/blog/state-of-the-art-query2query-similarity **Note: we are releasing this under a license which prevents commercial use. If you want to use it for commercial purposes, please reach out to contact@neeva.co or rajhans@neeva.co with a brief description of what you want to use it for and we will try our best to respond very quickly.** ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer queries = ["flight cost from nyc to la", "ticket prices from nyc to la"] model = SentenceTransformer('neeva/query2query') embeddings = model.encode(queries) print(embeddings) ``` ## Training The model was trained for 1M steps with a batch size of 1024 at a learning rate of 2e-5 using a cosine learning rate scheduler with 10000 warmup steps. ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DataParallel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ```
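As a small illustrative extension of the usage snippet above (not part of the original card), the similarity between two queries can be scored with `sentence_transformers.util.cos_sim`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('neeva/query2query')
embeddings = model.encode(
    ["flight cost from nyc to la", "ticket prices from nyc to la"],
    convert_to_tensor=True,
)

# Cosine similarity between the two query embeddings; since the model ends with a
# Normalize() module, a plain dot product would give the same ranking.
print(util.cos_sim(embeddings[0], embeddings[1]))
```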
MGanesh29/parrot_paraphraser_on_T5-finetuned-xsum-v7
MGanesh29
2022-09-26T06:47:57Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-22T09:34:29Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: parrot_paraphraser_on_T5-finetuned-xsum-v7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # parrot_paraphraser_on_T5-finetuned-xsum-v7 This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0316 - Rouge1: 86.4178 - Rouge2: 84.901 - Rougel: 86.458 - Rougelsum: 86.4281 - Gen Len: 17.887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.0752 | 1.0 | 2000 | 0.0439 | 86.0044 | 84.1284 | 86.0265 | 86.0167 | 17.895 | | 0.0454 | 2.0 | 4000 | 0.0352 | 86.2948 | 84.6092 | 86.3256 | 86.293 | 17.88 | | 0.0308 | 3.0 | 6000 | 0.0324 | 86.3316 | 84.7883 | 86.374 | 86.3355 | 17.887 | | 0.0242 | 4.0 | 8000 | 0.0316 | 86.4178 | 84.901 | 86.458 | 86.4281 | 17.887 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
jamiehuang12/t5-small-finetuned-xsum
jamiehuang12
2022-09-26T04:44:45Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:eli5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-26T03:22:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eli5 metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: eli5 type: eli5 config: LFQA_reddit split: train_eli5 args: LFQA_reddit metrics: - name: Rouge1 type: rouge value: 13.2843 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset. It achieves the following results on the evaluation set: - Loss: 3.6744 - Rouge1: 13.2843 - Rouge2: 2.006 - Rougel: 10.6541 - Rougelsum: 12.0343 - Gen Len: 18.9984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.8822 | 1.0 | 17040 | 3.6744 | 13.2843 | 2.006 | 10.6541 | 12.0343 | 18.9984 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
erikejw/swinv2-small-patch4-window16-256-finetuned-eurosat
erikejw
2022-09-26T03:31:23Z
172
0
transformers
[ "transformers", "pytorch", "tensorboard", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-09-26T01:44:49Z
--- tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - f1 - precision - recall model-index: - name: swinv2-small-patch4-window16-256-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9892592592592593 - name: F1 type: f1 value: 0.9892542163878574 - name: Precision type: precision value: 0.9892896521886161 - name: Recall type: recall value: 0.9892592592592593 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-small-patch4-window16-256-finetuned-eurosat This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window16-256](https://huggingface.co/microsoft/swinv2-small-patch4-window16-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0328 - Accuracy: 0.9893 - F1: 0.9893 - Precision: 0.9893 - Recall: 0.9893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.2326 | 1.0 | 253 | 0.0870 | 0.9715 | 0.9716 | 0.9720 | 0.9715 | | 0.1955 | 2.0 | 506 | 0.0576 | 0.9789 | 0.9788 | 0.9794 | 0.9789 | | 0.1229 | 3.0 | 759 | 0.0450 | 0.9837 | 0.9837 | 0.9839 | 0.9837 | | 0.0797 | 4.0 | 1012 | 0.0332 | 0.9889 | 0.9889 | 0.9889 | 0.9889 | | 0.0826 | 5.0 | 1265 | 0.0328 | 0.9893 | 0.9893 | 0.9893 | 0.9893 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
sahajrajmalla/patrakar
sahajrajmalla
2022-09-26T02:06:00Z
107
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "nepali-nlp", "nepali-news-classificiation", "nlp", "deep-learning", "transfer-learning", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-15T07:05:22Z
--- license: mit tags: - nepali-nlp - nepali-news-classificiation - nlp - transformers - deep-learning - pytorch - transfer-learning model-index: - name: patrakar results: [] widget: - text: "नेकपा (एमाले)का नेता गोकर्णराज विष्टले सहमति र सहकार्यबाटै संविधान बनाउने तथा जनताको जीवनस्तर उकास्ने काम गर्नु नै अबको मुख्य काम रहेको बताएका छन् ।" example_title: "Example 1" - text: "राजनीतिक स्थिरता नहुँदा विकास निर्माणले गति लिन सकेन ।" example_title: "Example 2" - text: "ठूलो उद्योग खोल्न महिलालाई ऋण दिइन्न" example_title: "Example 3" --- # patrakar/ पत्रकार (Nepali News Classifier) Last updated: September 2022 ## Model Details **patrakar** is a DistilBERT pre-trained sequence classification transformer model which classifies Nepali-language news into 9 newsgroup categories: - politics - opinion - bank - entertainment - economy - health - literature - sports - tourism It was developed by Sahaj Raj Malla to be generally useful for the general public and so that others can explore it for commercial and scientific purposes. This model was trained on top of the [Sakonii/distilgpt2-nepali](https://huggingface.co/Sakonii/distilgpt2-nepali) model. It achieves the following results on the test dataset: | Total Number of samples | Accuracy(%) | |:-------------:|:---------------:| | 5670 | 95.475 | ### Model date September 2022 ### Model type Sequence classification model ### Model version 1.0.0 ## Model Usage This model can be used directly with a pipeline for text classification. We set a seed for reproducibility: ```python from transformers import pipeline, set_seed set_seed(42) model_name = "sahajrajmalla/patrakar" classifier = pipeline('text-classification', model=model_name) text = "नेकपा (एमाले)का नेता गोकर्णराज विष्टले सहमति र सहकार्यबाटै संविधान बनाउने तथा जनताको जीवनस्तर उकास्ने काम गर्नु नै अबको मुख्य काम रहेको बताएका छन् ।" classifier(text) ``` Here is how we can use the model to classify a given text in PyTorch: ```python !pip install transformers torch from transformers import AutoTokenizer from transformers import AutoModelForSequenceClassification import torch import torch.nn.functional as F # initializing model and tokenizer model_name = "sahajrajmalla/patrakar" # downloading tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) # downloading model model = AutoModelForSequenceClassification.from_pretrained(model_name) def tokenize_function(examples): return tokenizer(examples["data"], padding="max_length", truncation=True) # predicting with the model sequence_i_want_to_predict = "राजनीतिक स्थिरता नहुँदा विकास निर्माणले गति लिन सकेन" # initializing our labels label_list = [ "bank", "economy", "entertainment", "health", "literature", "opinion", "politics", "sports", "tourism" ] batch = tokenizer(sequence_i_want_to_predict, padding=True, truncation=True, max_length=512, return_tensors='pt') with torch.no_grad(): outputs = model(**batch) predictions = F.softmax(outputs.logits, dim=1) labels = torch.argmax(predictions, dim=1) print(f"The sequence: \n\n {sequence_i_want_to_predict} \n\n is predicted to be of newsgroup {label_list[labels.item()]}") ``` ## Training data This model was trained on 50,945 rows of a Nepali-language news [dataset](https://www.kaggle.com/competitions/text-it-meet-22/data?select=train.csv) found on Kaggle, which was also used in the IT Meet 2022 Text challenge. ## Framework versions - Transformers 4.20.1 - Pytorch 1.9.1 - Datasets 2.0.0 - Tokenizers 0.11.6
ramsformers/shoes-brand
ramsformers
2022-09-26T01:56:03Z
226
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-09-26T01:55:52Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: shoes-brand results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.6716417670249939 --- # shoes-brand Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### adidas shoes ![adidas shoes](images/adidas_shoes.jpg) #### nike shoes ![nike shoes](images/nike_shoes.jpg) #### puma shoes ![puma shoes](images/puma_shoes.jpg)
jamiehuang/t5-small-finetuned-xsum
jamiehuang
2022-09-26T01:29:12Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:eli5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-24T21:08:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eli5 metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: eli5 type: eli5 config: LFQA_reddit split: train_eli5 args: LFQA_reddit metrics: - name: Rouge1 type: rouge value: 13.2962 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset. It achieves the following results on the evaluation set: - Loss: 3.6746 - Rouge1: 13.2962 - Rouge2: 2.0081 - Rougel: 10.6529 - Rougelsum: 12.049 - Gen Len: 18.9985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.8901 | 1.0 | 17040 | 3.6746 | 13.2962 | 2.0081 | 10.6529 | 12.049 | 18.9985 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
ammarpl/t5-base-finetuned-elif-attempt1
ammarpl
2022-09-26T01:14:32Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:eli5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-25T21:01:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eli5 metrics: - rouge model-index: - name: t5-base-finetuned-elif-attempt1 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: eli5 type: eli5 config: LFQA_reddit split: train_eli5 args: LFQA_reddit metrics: - name: Rouge1 type: rouge value: 3.9675 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-finetuned-elif-attempt1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset. It achieves the following results on the evaluation set: - Loss: 5.3889 - Rouge1: 3.9675 - Rouge2: 0.248 - Rougel: 3.454 - Rougelsum: 3.765 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 5.8271 | 1.0 | 17040 | 5.3889 | 3.9675 | 0.248 | 3.454 | 3.765 | 19.0 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
farleyknight/patent-summarization-allen-led-large-2022-09-20
farleyknight
2022-09-25T23:46:58Z
96
1
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "generated_from_trainer", "dataset:farleyknight/big_patent_5_percent", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-20T16:22:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - farleyknight/big_patent_5_percent metrics: - rouge model-index: - name: patent-summarization-allen-led-large-2022-09-20 results: - task: name: Summarization type: summarization dataset: name: farleyknight/big_patent_5_percent type: farleyknight/big_patent_5_percent config: all split: train args: all metrics: - name: Rouge1 type: rouge value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # patent-summarization-allen-led-large-2022-09-20 This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on the farleyknight/big_patent_5_percent dataset. It achieves the following results on the evaluation set: - Loss: 2.8233 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 128.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 3.4766 | 0.08 | 5000 | 3.4240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.2549 | 0.17 | 10000 | 3.2908 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.2295 | 0.25 | 15000 | 3.1862 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.1455 | 0.33 | 20000 | 3.1291 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.0526 | 0.41 | 25000 | 3.0684 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.0024 | 0.5 | 30000 | 3.0134 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9671 | 0.58 | 35000 | 2.9696 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9862 | 0.66 | 40000 | 2.9431 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9168 | 0.75 | 45000 | 2.8989 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9063 | 0.83 | 50000 | 2.8559 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.8417 | 0.91 | 55000 | 2.8398 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.7853 | 0.99 | 60000 | 2.8240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
amirabbas/wav2vec2-large-xls-r-300m-turkish-demo-colab-1
amirabbas
2022-09-25T23:11:58Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-25T19:40:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-demo-colab-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-demo-colab-1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3487 - Wer: 0.3000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0425 | 3.67 | 400 | 0.7168 | 0.6650 | | 0.4365 | 7.34 | 800 | 0.4498 | 0.4695 | | 0.2103 | 11.01 | 1200 | 0.3975 | 0.3840 | | 0.1257 | 14.68 | 1600 | 0.3655 | 0.3341 | | 0.0828 | 18.35 | 2000 | 0.3487 | 0.3000 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
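A minimal transcription sketch for this checkpoint is below; it assumes the repository includes the Wav2Vec2 processor/tokenizer files and that `sample.wav` is a placeholder path to a 16 kHz Turkish recording.

```python
from transformers import pipeline

# Assumption: the repo ships the Wav2Vec2 processor alongside the model weights.
asr = pipeline(
    "automatic-speech-recognition",
    model="amirabbas/wav2vec2-large-xls-r-300m-turkish-demo-colab-1",
)

# "sample.wav" is a placeholder for a 16 kHz Turkish speech file.
print(asr("sample.wav")["text"])
```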
BumblingOrange/Shalltear_Bloodfallen
BumblingOrange
2022-09-25T22:25:26Z
0
1
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-09-25T19:31:46Z
--- license: bigscience-bloom-rail-1.0 --- Uses the Waifu Diffusion model as a base, linked here: https://huggingface.co/hakurei/waifu-diffusion Custom Dreambooth model based on the likeness of Shalltear Bloodfallen from Overlord. The dataset was 15 training images and 13 regularization images. Trained for 3000 steps. To use the model, simply insert the name 'Shalltear Bloodfallen' into your prompts. The class token used was 'vampire_girl_hair_bow_white_hair'. Append the class token after Shalltear Bloodfallen for stronger results. EX: "A photo of Shalltear Bloodfallen vampire_girl_hair_bow_white_hair"
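A minimal inference sketch with `diffusers` is below. It assumes the repository hosts the fine-tuned weights in diffusers format and that a CUDA GPU is available; adjust the loading step if only a `.ckpt` file is published.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: this repo contains diffusers-format weights for the Dreambooth fine-tune.
pipe = StableDiffusionPipeline.from_pretrained(
    "BumblingOrange/Shalltear_Bloodfallen",
    torch_dtype=torch.float16,
).to("cuda")

# Name plus class token, as described in the card above.
prompt = "A photo of Shalltear Bloodfallen vampire_girl_hair_bow_white_hair"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("shalltear.png")
```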
sd-concepts-library/remert
sd-concepts-library
2022-09-25T20:50:59Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-25T20:50:05Z
--- license: mit --- ### remert on Stable Diffusion This is the `<Remert>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:
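For reference, a sketch of loading the concept outside the notebooks is below. It assumes the embedding is stored as `learned_embeds.bin`, the usual layout for sd-concepts-library repos, and injects the placeholder token into the text encoder manually.

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Assumption: the concept is saved as learned_embeds.bin in this repo.
embeds_path = hf_hub_download(repo_id="sd-concepts-library/remert", filename="learned_embeds.bin")

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# The file maps the placeholder token (e.g. "<Remert>") to its learned embedding.
token, embedding = next(iter(torch.load(embeds_path, map_location="cpu").items()))

# Register the new token and write its embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a landscape painting in the style of {token}").images[0]
image.save("remert_style.png")
```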
BumblingOrange/Cirno
BumblingOrange
2022-09-25T20:08:39Z
0
4
null
[ "region:us" ]
null
2022-09-25T19:37:51Z
Uses the Waifu Diffusion model as a base, linked here: https://huggingface.co/hakurei/waifu-diffusion Custom Dreambooth model based on the likeness of Cirno from Touhou. The dataset was 16 training images and 18 regularization images. Trained for 3000 steps. To use the model, simply insert the phrase 'A photo of sks' into your prompts. The class token used was 'ice_fairy'. Append the class token after 'A photo of sks' for stronger results. EX: "A photo of sks ice_fairy"
quecopiones/distillbert-base-spanish-uncased-finetuned-full-suicidios
quecopiones
2022-09-25T19:52:22Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-25T14:14:14Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distillbert-base-spanish-uncased-finetuned-full-suicidios results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distillbert-base-spanish-uncased-finetuned-full-suicidios This model is a fine-tuned version of [CenIA/distillbert-base-spanish-uncased](https://huggingface.co/CenIA/distillbert-base-spanish-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0825 - Accuracy: 0.9814 - F1: 0.9814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.2059 | 1.0 | 32058 | 0.1142 | 0.9694 | 0.9694 | | 0.1229 | 2.0 | 64116 | 0.0825 | 0.9814 | 0.9814 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
monakth/distilbert-base-multilingual-cased-finetuned-squad
monakth
2022-09-25T19:18:13Z
121
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-09-25T16:02:50Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-multilingual-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2983 | 1.0 | 5555 | 1.2202 | | 1.0252 | 2.0 | 11110 | 1.1583 | | 0.8078 | 3.0 | 16665 | 1.1954 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
eliwill/stoic-generator-10e
eliwill
2022-09-25T18:37:19Z
59
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-09-25T18:25:15Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: eliwill/stoic-generator-10e results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # eliwill/stoic-generator-10e This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.4753 - Validation Loss: 3.7980 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.0230 | 3.9474 | 0 | | 3.8580 | 3.8982 | 1 | | 3.7757 | 3.8721 | 2 | | 3.7149 | 3.8489 | 3 | | 3.6640 | 3.8343 | 4 | | 3.6210 | 3.8152 | 5 | | 3.5796 | 3.8088 | 6 | | 3.5429 | 3.8038 | 7 | | 3.5061 | 3.7967 | 8 | | 3.4753 | 3.7980 | 9 | ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Datasets 2.5.1 - Tokenizers 0.12.1
sharonpeng/distilbert-base-uncased-finetuned-squad
sharonpeng
2022-09-25T18:31:46Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-09-06T06:04:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.215 | 1.0 | 5533 | 1.1619 | | 0.9533 | 2.0 | 11066 | 1.1257 | | 0.7566 | 3.0 | 16599 | 1.1456 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
amirabbas/wav2vec2-large-xls-r-300m-turkish-demo-colab
amirabbas
2022-09-25T18:23:15Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-25T12:17:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
BumblingOrange/Holo
BumblingOrange
2022-09-25T17:15:45Z
0
2
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-09-25T05:01:30Z
--- license: bigscience-bloom-rail-1.0 --- Uses the Waifu Diffusion model as a base, linked here: https://huggingface.co/hakurei/waifu-diffusion Custom Dreambooth model based on the likeness of Holo from Spice and Wolf. The dataset was 7 training images and 14 regularization images. Trained for 3000 steps. To use the model, simply insert the name "Holo" into your prompts. The class token was "wolf_girl". Add the class token after "Holo" for stronger results. EXAMPLE: "A photo of Holo wolf_girl"
kevinbram/nyfin
kevinbram
2022-09-25T17:13:57Z
112
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-09-25T15:28:32Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: nyfin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nyfin This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2155 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.26 | 1.0 | 5533 | 1.2155 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.11.0 - Datasets 2.4.0 - Tokenizers 0.12.1
Lagstill/GPT-2-Hindi
Lagstill
2022-09-25T16:48:30Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-09-25T16:48:30Z
--- license: bigscience-openrail-m ---
simecek/DNADebertaK6_Worm
simecek
2022-09-25T14:28:30Z
162
0
transformers
[ "transformers", "pytorch", "deberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-19T08:02:15Z
--- tags: - generated_from_trainer model-index: - name: DNADebertaK6_Worm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DNADebertaK6_Worm This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 600001 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:------:|:---------------:| | 4.5653 | 7.26 | 20000 | 1.8704 | | 1.8664 | 14.53 | 40000 | 1.7762 | | 1.7803 | 21.79 | 60000 | 1.7429 | | 1.7502 | 29.06 | 80000 | 1.7305 | | 1.7329 | 36.32 | 100000 | 1.7185 | | 1.7191 | 43.59 | 120000 | 1.7073 | | 1.7065 | 50.85 | 140000 | 1.6925 | | 1.6945 | 58.12 | 160000 | 1.6877 | | 1.6862 | 65.38 | 180000 | 1.6792 | | 1.6788 | 72.65 | 200000 | 1.6712 | | 1.6729 | 79.91 | 220000 | 1.6621 | | 1.6679 | 87.18 | 240000 | 1.6608 | | 1.6632 | 94.44 | 260000 | 1.6586 | | 1.6582 | 101.71 | 280000 | 1.6585 | | 1.6551 | 108.97 | 300000 | 1.6564 | | 1.6507 | 116.24 | 320000 | 1.6449 | | 1.6481 | 123.5 | 340000 | 1.6460 | | 1.6448 | 130.77 | 360000 | 1.6411 | | 1.6425 | 138.03 | 380000 | 1.6408 | | 1.6387 | 145.3 | 400000 | 1.6358 | | 1.6369 | 152.56 | 420000 | 1.6373 | | 1.6337 | 159.83 | 440000 | 1.6364 | | 1.6312 | 167.09 | 460000 | 1.6303 | | 1.6298 | 174.36 | 480000 | 1.6346 | | 1.6273 | 181.62 | 500000 | 1.6272 | | 1.6244 | 188.88 | 520000 | 1.6268 | | 1.6225 | 196.15 | 540000 | 1.6295 | | 1.6207 | 203.41 | 560000 | 1.6206 | | 1.6186 | 210.68 | 580000 | 1.6277 | | 1.6171 | 217.94 | 600000 | 1.6161 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
simecek/DNADebertaK6_Arabidopsis
simecek
2022-09-25T14:27:59Z
178
1
transformers
[ "transformers", "pytorch", "deberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-19T07:42:31Z
--- tags: - generated_from_trainer model-index: - name: DNADebertaK6_Arabidopsis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DNADebertaK6_Arabidopsis This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 600001 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:------:|:---------------:| | 4.6174 | 6.12 | 20000 | 1.9257 | | 1.8873 | 12.24 | 40000 | 1.8098 | | 1.8213 | 18.36 | 60000 | 1.7952 | | 1.8042 | 24.48 | 80000 | 1.7888 | | 1.7945 | 30.6 | 100000 | 1.7861 | | 1.7873 | 36.72 | 120000 | 1.7772 | | 1.782 | 42.84 | 140000 | 1.7757 | | 1.7761 | 48.96 | 160000 | 1.7632 | | 1.7714 | 55.08 | 180000 | 1.7685 | | 1.7677 | 61.2 | 200000 | 1.7568 | | 1.7637 | 67.32 | 220000 | 1.7570 | | 1.7585 | 73.44 | 240000 | 1.7442 | | 1.7554 | 79.56 | 260000 | 1.7556 | | 1.7515 | 85.68 | 280000 | 1.7505 | | 1.7483 | 91.8 | 300000 | 1.7463 | | 1.745 | 97.92 | 320000 | 1.7425 | | 1.7427 | 104.04 | 340000 | 1.7425 | | 1.7398 | 110.16 | 360000 | 1.7359 | | 1.7377 | 116.28 | 380000 | 1.7369 | | 1.7349 | 122.4 | 400000 | 1.7340 | | 1.7325 | 128.52 | 420000 | 1.7313 | | 1.731 | 134.64 | 440000 | 1.7256 | | 1.7286 | 140.76 | 460000 | 1.7238 | | 1.7267 | 146.88 | 480000 | 1.7324 | | 1.7247 | 153.0 | 500000 | 1.7247 | | 1.7228 | 159.12 | 520000 | 1.7185 | | 1.7209 | 165.24 | 540000 | 1.7166 | | 1.7189 | 171.36 | 560000 | 1.7206 | | 1.7181 | 177.48 | 580000 | 1.7190 | | 1.7159 | 183.6 | 600000 | 1.7194 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
apipond/ppo-LunarLander-v2
apipond
2022-09-25T13:37:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-09-25T13:37:18Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 226.89 +/- 17.19 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
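Since the usage section above is still a TODO, here is a hedged sketch of loading and evaluating the checkpoint; the `.zip` filename is an assumption and should be checked against the repository's file listing.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint is stored as ppo-LunarLander-v2.zip in this repo.
checkpoint = load_from_hub(repo_id="apipond/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```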
Okyx/fillmaskmodel
Okyx
2022-09-25T12:36:49Z
59
0
transformers
[ "transformers", "tf", "xlm-roberta", "fill-mask", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-25T12:32:35Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: fillmaskmodel results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # fillmaskmodel This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Tokenizers 0.12.1
weijiahaha/t5-small-summarization
weijiahaha
2022-09-25T12:21:01Z
112
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-26T07:38:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: t5-small-summarization results: [] --- # t5-small-summarization This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6477 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9195 | 1.0 | 718 | 1.6477 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
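A minimal usage sketch (not part of the original card) is below; the generation settings are illustrative rather than taken from the training setup.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="weijiahaha/t5-small-summarization")

# A short stand-in article; any CNN/DailyMail-style news text works here.
article = (
    "The local council approved a new public transport plan on Monday. "
    "The plan adds three bus routes, extends weekend service hours, "
    "and is expected to take effect early next year."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```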
rram12/Pixelcopter-PLE-v0
rram12
2022-09-25T11:56:34Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-09-25T11:56:26Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 7.10 +/- 5.39 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
huynguyen208/distilbert-base-uncased-finetuned-ner
huynguyen208
2022-09-25T11:51:11Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-25T11:36:57Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0984 - Precision: 0.6767 - Recall: 0.6693 - F1: 0.6730 - Accuracy: 0.9711 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2551 | 1.0 | 843 | 0.1340 | 0.5996 | 0.5465 | 0.5718 | 0.9623 | | 0.1126 | 2.0 | 1686 | 0.1039 | 0.6635 | 0.6331 | 0.6480 | 0.9699 | | 0.0857 | 3.0 | 2529 | 0.0984 | 0.6767 | 0.6693 | 0.6730 | 0.9711 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
ShadowTwin41/distilbert-base-uncased-finetuned-imdb
ShadowTwin41
2022-09-25T10:07:17Z
161
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-25T09:54:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.7181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 384 - eval_batch_size: 384 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 27 | 0.7668 | | No log | 2.0 | 54 | 0.7282 | | No log | 3.0 | 81 | 0.7165 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
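A short usage sketch for the masked-language-modelling head via the `fill-mask` pipeline (DistilBERT's mask token is `[MASK]`); the example sentence is illustrative only:

```python
from transformers import pipeline

# Load the IMDB-adapted checkpoint from the Hub
fill_mask = pipeline("fill-mask", model="ShadowTwin41/distilbert-base-uncased-finetuned-imdb")

# Print the top predictions for the masked position
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```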
shed-e/thucnews
shed-e
2022-09-25T08:29:09Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:load_train", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-25T07:46:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - load_train metrics: - accuracy model-index: - name: thucnews results: - task: name: Text Classification type: text-classification dataset: name: load_train type: load_train config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9433 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thucnews This model is a fine-tuned version of [hfl/rbt6](https://huggingface.co/hfl/rbt6) on the load_train dataset. It achieves the following results on the evaluation set: - Loss: 0.3191 - Accuracy: 0.9433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2038 | 1.0 | 704 | 0.2018 | 0.9332 | | 0.1403 | 2.0 | 1408 | 0.1829 | 0.9406 | | 0.0894 | 3.0 | 2112 | 0.2073 | 0.9419 | | 0.056 | 4.0 | 2816 | 0.2228 | 0.9408 | | 0.0321 | 5.0 | 3520 | 0.2689 | 0.9417 | | 0.0209 | 6.0 | 4224 | 0.2819 | 0.9431 | | 0.0099 | 7.0 | 4928 | 0.3131 | 0.9421 | | 0.0057 | 8.0 | 5632 | 0.3191 | 0.9433 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
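THUCNews is a Chinese news-topic corpus, so inputs should be Chinese text. A hedged usage sketch with the `text-classification` pipeline follows; note that unless `id2label` was set during training, the pipeline may return generic labels such as `LABEL_3` rather than topic names.

```python
from transformers import pipeline

# Load the fine-tuned Chinese news-topic classifier from the Hub
classifier = pipeline("text-classification", model="shed-e/thucnews")

# Illustrative Chinese news headline (sports)
print(classifier("皇马客场2比1击败拜仁,晋级欧冠决赛"))
```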
jamescalam/mpnet-snli-negatives
jamescalam
2022-09-25T07:33:28Z
13
1
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:snli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-22T08:27:42Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - en license: mit datasets: - snli --- # MPNet NLI This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been fine-tuned using the **S**tanford **N**atural **L**anguage **I**nference (SNLI) dataset (including negatives) and returns MRR@10 and MAP scores of ~0.95 on the SNLI test set. Find more info from [James Briggs on YouTube](https://youtube.com/c/jamesbriggs) or in the [**free** NLP for Semantic Search ebook](https://pinecone.io/learn/nlp). <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('jamescalam/mpnet-snli-negatives') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jamescalam/mpnet-snli-negatives') model = AutoModel.from_pretrained('jamescalam/mpnet-snli-negatives') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4660 with parameters: ``` {'batch_size': 32} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 466, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
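The reported training setup can be approximated with a short `sentence-transformers` script. The sketch below is illustrative only: the backbone name (`microsoft/mpnet-base`), the tiny in-line triplets standing in for the SNLI-derived (anchor, positive, negative) examples, and the reduced batch size are assumptions, while the loss, scale, learning rate, warmup steps and weight decay mirror the values listed above.

```python
from sentence_transformers import InputExample, SentenceTransformer, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Assumed backbone; SentenceTransformer adds mean pooling automatically for a plain HF model
model = SentenceTransformer("microsoft/mpnet-base")

# (anchor, positive, negative) triplets - in the real run these are built from SNLI
# entailment/contradiction pairs; a tiny illustrative sample is used here
train_examples = [
    InputExample(texts=[
        "A man is playing a guitar on stage.",
        "A musician performs live.",
        "The stage is completely empty.",
    ]),
    InputExample(texts=[
        "Two dogs run across a field.",
        "Animals are moving outdoors.",
        "The dogs are asleep inside the house.",
    ]),
]

# Batch size was 32 in the reported run; kept small here to match the toy sample
loader = NoDuplicatesDataLoader(train_examples, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(loader, loss)],
    epochs=1,
    warmup_steps=466,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```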
jamescalam/mpnet-nli-sts
jamescalam
2022-09-25T07:28:38Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:snli", "dataset:stsb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-25T07:13:18Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - en license: mit datasets: - snli - stsb --- # MPNet NLI and STS This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It uses the [jamescalam/mpnet-snli-negatives](https://huggingface.co/jamescalam/mpnet-snli-negatives) model as a starting point, and is fine-tuned further on the **S**emantic **T**extual **S**imilarity **b**enchmark (STSb) dataset. Returning evaluation scores of ~0.9 cosine Pearson correlation using the STSb test set. Find more info from [James Briggs on YouTube](https://youtube.com/c/jamesbriggs) or in the [**free** NLP for Semantic Search ebook](https://pinecone.io/learn/nlp). <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('jamescalam/mpnet-nli-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jamescalam/mpnet-nli-sts') model = AutoModel.from_pretrained('jamescalam/mpnet-nli-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 180 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 25, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 90, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
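Because the second fine-tuning stage optimises a cosine-similarity objective, scoring sentence pairs is the natural downstream use; a short sketch with the `sentence-transformers` utilities (the query and passages are illustrative only):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jamescalam/mpnet-nli-sts")

queries = ["How do I bake sourdough bread at home?"]
passages = [
    "A step-by-step guide to baking sourdough in a home oven.",
    "The championship final was decided on penalties.",
]

# Cosine similarity between embeddings is the score the STS fine-tuning optimises
query_emb = model.encode(queries, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)
print(util.cos_sim(query_emb, passage_emb))
```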
ShadowTwin41/bert-finetuned-ner
ShadowTwin41
2022-09-25T07:26:43Z
105
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-25T07:18:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9127878490935816 - name: Recall type: recall value: 0.9405923931336251 - name: F1 type: f1 value: 0.9264815582262743 - name: Accuracy type: accuracy value: 0.9841937952551951 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0586 - Precision: 0.9128 - Recall: 0.9406 - F1: 0.9265 - Accuracy: 0.9842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 293 | 0.0844 | 0.8714 | 0.9123 | 0.8914 | 0.9760 | | 0.1765 | 2.0 | 586 | 0.0601 | 0.9109 | 0.9357 | 0.9231 | 0.9834 | | 0.1765 | 3.0 | 879 | 0.0586 | 0.9128 | 0.9406 | 0.9265 | 0.9842 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
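A usage sketch for the CoNLL-2003 fine-tuned checkpoint; with this label set the pipeline should return `PER`, `ORG`, `LOC` and `MISC` entity groups (the example sentence is illustrative only):

```python
from transformers import pipeline

# Load the CoNLL-2003 fine-tuned checkpoint from the Hub
ner = pipeline(
    "token-classification",
    model="ShadowTwin41/bert-finetuned-ner",
    aggregation_strategy="simple",  # group B-/I- word pieces into whole entities
)

for entity in ner("Angela Merkel visited the Volkswagen plant in Wolfsburg."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```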