| Column | Type | Values / Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 00:37:20 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 537 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 00:37:04 |
| card | string | length 11 to 1.01M |
olpa/bert-base-uncased-issues-128
olpa
2022-11-11T20:26:00Z
109
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-11T04:09:26Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.9863 | 1.0 | 1163 | 1.5917 | | 1.579 | 2.0 | 2326 | 1.4668 | | 1.4305 | 3.0 | 3489 | 1.3947 | | 1.3367 | 4.0 | 4652 | 1.2889 | | 1.2748 | 5.0 | 5815 | 1.3377 | | 1.2184 | 6.0 | 6978 | 1.3370 | | 1.1708 | 7.0 | 8141 | 1.2595 | | 1.1309 | 8.0 | 9304 | 1.3044 | | 1.0896 | 9.0 | 10467 | 1.2314 | | 1.0614 | 10.0 | 11630 | 1.1699 | | 1.0296 | 11.0 | 12793 | 1.2066 | | 0.9896 | 12.0 | 13956 | 1.2158 | | 0.983 | 13.0 | 15119 | 1.1289 | | 0.9619 | 14.0 | 16282 | 1.1427 | | 0.9387 | 15.0 | 17445 | 1.0767 | | 0.9205 | 16.0 | 18608 | 1.1389 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu102 - Datasets 2.2.1 - Tokenizers 0.12.1
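The card above stops at the training metrics; as a minimal usage sketch (not part of the original card, and assuming the checkpoint loads as a standard `transformers` fill-mask model), inference could look like this:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="olpa/bert-base-uncased-issues-128")

# BERT-style models use the [MASK] token; the checkpoint was fine-tuned on GitHub-issue text.
for prediction in fill_mask("This issue is related to the [MASK] module."):
    print(prediction["token_str"], round(prediction["score"], 3))
```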
cyburn/the_white_queen
cyburn
2022-11-11T20:18:03Z
0
0
null
[ "license:unknown", "region:us" ]
null
2022-11-04T13:14:01Z
--- license: unknown --- # The White Queen Dreambooth Model Produced from 36 publicly available pictures in landscape, portrait and square format. ## Model info The models included were trained on "multi-resolution" images of "The White Queen" ## Using the model * common subject prompt tokens: `the white queen` * optional prompt tokens: `crown` ## Example prompts To obtain The White Queen without a crown simply use: `the white queen` To obtain The White Queen wearing her crown use: `the white queen crown` You can add more info to the prompt to get, for example, a watercolor image of her with: `the white queen crown watercolor`
princeton-nlp/unsup-simcse-bert-base-uncased
princeton-nlp
2022-11-11T20:04:07Z
20,199
5
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "arxiv:2104.08821", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- tags: - feature-extraction - bert --- # Model Card for unsup-simcse-bert-base-uncased # Model Details ## Model Description More information needed - **Developed by:** Princeton NLP group - **Shared by [Optional]:** Hugging Face - **Model type:** Feature Extraction - **Language(s) (NLP):** More information needed - **License:** More information needed - **Related Models:** - **Parent Model:** BERT - **Resources for more information:** - [GitHub Repo](https://github.com/princeton-nlp/SimCSE) - [Model Space](https://huggingface.co/spaces/mteb/leaderboard) - [Associated Paper](https://arxiv.org/abs/2104.08821) # Uses ## Direct Use This model can be used for the task of Feature Extraction. ## Downstream Use [Optional] More information needed ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The model creators note in the [Github Repository](https://github.com/princeton-nlp/SimCSE/blob/main/README.md) > We train unsupervised SimCSE on 10^6 randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k). ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf) > Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, our evaluation takes the "all" setting, and report Spearman's correlation. See [associated paper](https://arxiv.org/pdf/2104.08821.pdf) (Appendix B) for evaluation details. ### Factors More information needed ### Metrics More information needed ## Results More information needed # Model Examination The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf) > **Uniformity and alignment.** We also observe that (1) though pre-trained embeddings have good alignment, their uniformity is poor (i.e., the embeddings are highly anisotropic); (2) post-processing methods like BERT-flow and BERT-whitening greatly improve uniformity but also suffer a degeneration in alignment; (3) unsupervised SimCSE effectively improves uniformity of pre-trained embeddings whereas keeping a good alignment; (4) incorporating supervised data in SimCSE further amends alignment. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Nvidia 3090 GPUs with CUDA 11 - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** ```bibtex @inproceedings{gao2021simcse, title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings}, author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2021} } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact If you have any questions related to the code or the paper, feel free to email Tianyu (`tianyug@cs.princeton.edu`) and Xingcheng (`yxc18@mails.tsinghua.edu.cn`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to specify the problem with details so we can help you better and quicker! # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased") model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased") ``` </details>
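A common follow-up to the snippet above is to turn the extracted features into a sentence-similarity score; the sketch below is not from the original card and uses the [CLS] representation as the sentence embedding, which is one common pooling choice for SimCSE-style models:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased")

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # Use the [CLS] token of the last hidden layer as the sentence embedding.
    embeddings = model(**inputs).last_hidden_state[:, 0]

# Cosine similarity between the two sentence embeddings (higher means more similar).
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```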
anton-l/wav2vec2-base-superb-sv
anton-l
2022-11-11T19:30:49Z
821
3
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-xvector", "speech", "audio", "audio-classification", "en", "dataset:superb", "arxiv:2105.01051", "arxiv:1910.09700", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- language: en datasets: - superb tags: - speech - audio - wav2vec2 - audio-classification license: apache-2.0 --- # Model Card for wav2vec2-base-superb-sv # Model Details ## Model Description - **Developed by:** Shu-wen Yang et al. - **Shared by:** Anton Lozhkov - **Model type:** Wav2Vec2 with an XVector head - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Related Models:** - **Parent Model:** wav2vec2-large-lv60 - **Resources for more information:** - [GitHub Repo](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1) - [Associated Paper](https://arxiv.org/abs/2105.01051) # Uses ## Direct Use This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Speaker Verification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1). The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data See the [superb dataset card](https://huggingface.co/datasets/superb) ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data See the [superb dataset card](https://huggingface.co/datasets/superb) ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** ``` @misc{https://doi.org/10.48550/arxiv.2006.11477, doi = {10.48550/ARXIV.2006.11477}, url = {https://arxiv.org/abs/2006.11477}, author = {Baevski, Alexei and Zhou, Henry and Mohamed, Abdelrahman and Auli, Michael}, keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering}, title = {wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations}, publisher = {arXiv}, year = {2020}, } @misc{https://doi.org/10.48550/arxiv.2105.01051, doi = {10.48550/ARXIV.2105.01051}, url = {https://arxiv.org/abs/2105.01051}, author = {Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y. and Liu, Andy T. and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and Huang, Tzu-Hsien and Tseng, Wei-Cheng and Lee, Ko-tik and Liu, Da-Rong and Huang, Zili and Dong, Shuyan and Li, Shang-Wen and Watanabe, Shinji and Mohamed, Abdelrahman and Lee, Hung-yi}, keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering}, title = {SUPERB: Speech processing Universal PERformance Benchmark}, publisher = {arXiv}, year = {2021}, } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Anton Lozhkov in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoProcessor, AutoModelForAudioXVector processor = AutoProcessor.from_pretrained("anton-l/wav2vec2-base-superb-sv") model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv") ``` </details>
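As a minimal sketch of the speaker-verification use case (not from the original card; it assumes 16 kHz mono waveforms and uses random arrays as placeholders), two utterances can be compared by the cosine similarity of their x-vectors:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioXVector

feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")

# Placeholder waveforms: replace with real 16 kHz mono audio (1-D float arrays).
wave1 = np.random.randn(16000).astype(np.float32)
wave2 = np.random.randn(16000).astype(np.float32)

inputs = feature_extractor([wave1, wave2], sampling_rate=16000, padding=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).embeddings

# Higher cosine similarity between the two x-vectors suggests the same speaker.
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```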
jbdaniel/distilroberta-base-finetuned-wikitext2
jbdaniel
2022-11-11T19:28:28Z
194
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-11T18:52:30Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0843 | 1.0 | 2406 | 1.9226 | | 1.9913 | 2.0 | 4812 | 1.8820 | | 1.9597 | 3.0 | 7218 | 1.8214 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
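The card documents training only; as an illustrative sketch (not from the original card), the checkpoint could also be queried directly through the masked-language-modeling head. Note that RoBERTa-style tokenizers use `<mask>` rather than `[MASK]`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("jbdaniel/distilroberta-base-finetuned-wikitext2")
model = AutoModelForMaskedLM.from_pretrained("jbdaniel/distilroberta-base-finetuned-wikitext2")

# Build a masked sentence using the tokenizer's own mask token.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and take the highest-scoring token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```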
KVBAF/VTSWE703
KVBAF
2022-11-11T19:12:03Z
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2022-11-11T19:12:03Z
--- license: cc-by-nc-sa-4.0 ---
lgris/whisper-small-cv11-pt
lgris
2022-11-11T18:37:51Z
7
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-06T22:58:18Z
--- language: - pt license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small PT with Common Voice 11 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 args: 'config: pt, split: test' metrics: - name: Wer type: wer value: 14.380154024398555 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small PT with Common Voice 11 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3487 - Wer: 14.3802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.1202 | 0.88 | 1000 | 0.2225 | 15.5847 | | 0.1024 | 1.76 | 2000 | 0.2160 | 15.0651 | | 0.0832 | 2.64 | 3000 | 0.2259 | 15.0923 | | 0.0081 | 3.51 | 4000 | 0.2519 | 14.7345 | | 0.0387 | 4.39 | 5000 | 0.2718 | 14.7311 | | 0.0039 | 5.27 | 6000 | 0.3031 | 14.5914 | | 0.001 | 6.15 | 7000 | 0.3238 | 14.5710 | | 0.0007 | 7.03 | 8000 | 0.3285 | 14.5113 | | 0.0009 | 7.91 | 9000 | 0.3467 | 14.3580 | | 0.0008 | 8.79 | 10000 | 0.3487 | 14.3802 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.12.1
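For completeness, a minimal transcription sketch (not part of the original card; the audio path is a placeholder and should point to a Portuguese speech recording):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="lgris/whisper-small-cv11-pt")

# "sample_pt.wav" is a placeholder path to a Portuguese recording.
print(asr("sample_pt.wav")["text"])
```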
Signorlimone/Clazy
Signorlimone
2022-11-11T18:37:37Z
0
13
null
[ "region:us" ]
null
2022-11-11T17:46:58Z
This is Clazy! A new model I trained using Shivam's repo of Dreambooth. Use "clazy style" (without quotation marks) in the prompt for the model to kick in. At the moment it needs more finetuning and seems to have problems with mouths. Put "open mouth" in the negative prompt to mitigate this effect. It should work with animals too, but since it was trained on human subjects you need to add brackets, like this for example: [clazy style:10], where 10 is the sampling step at which you want the model to kick in; the first 10 steps will be dedicated to generating some noise of the actual animal. You can also use this to get a better likeness on some subjects if you aren't satisfied with the results. Use a CFG scale between 3 and 7 to obtain smoother results, and raise the step count accordingly (between 20 and 50 should be enough).
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-evn4-ntsema-colab
ntsema
2022-11-11T18:17:23Z
97
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-11T16:19:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: wav2vec2-xlsr-53-espeak-cv-ft-evn4-ntsema-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 0.9833333333333333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-espeak-cv-ft-evn4-ntsema-colab This model is a fine-tuned version of [ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab](https://huggingface.co/ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 2.0821 - Wer: 0.9833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.3115 | 6.15 | 400 | 1.6416 | 0.9867 | | 0.9147 | 12.3 | 800 | 1.6538 | 0.9867 | | 0.5301 | 18.46 | 1200 | 1.8461 | 0.98 | | 0.2865 | 24.61 | 1600 | 2.0821 | 0.9833 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
SpiteAnon/Pepestyle
SpiteAnon
2022-11-11T18:05:47Z
0
12
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-11T17:21:26Z
--- license: creativeml-openrail-m --- A Dreambooth model created with the sole purpose of generating the rarest and dankest pepes. StableDiffusion 1.5 was used as a base for this model. 22 instance images, 400 class images, 2.2k steps at a 1.3e-6 learning rate. Use the phrase 'pepestyle person' <img src="https://huggingface.co/SpiteAnon/Pepestyle/resolve/main/pepestylev2.png" alt="pepestylev2" width="400"/> <img src="https://huggingface.co/SpiteAnon/Pepestyle/resolve/main/pepestylev2-drawing.png" alt="pepestylev2-drawing" width="400"/> <img src="https://huggingface.co/SpiteAnon/Pepestyle/resolve/main/pepestylev2-suit-hat.png" alt="pepestylev2-suit" width="400"/>
dn-gh/gpt2_pubmed_tokenizer
dn-gh
2022-11-11T17:32:33Z
0
1
null
[ "tokenizer", "en", "dataset:pubmed_qa", "license:mit", "region:us" ]
null
2022-11-11T13:16:42Z
--- language: en license: mit datasets: pubmed_qa tags: - tokenizer ---
amitjohn007/electra-finetuned-squad
amitjohn007
2022-11-11T16:54:22Z
59
0
transformers
[ "transformers", "tf", "electra", "question-answering", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
question-answering
2022-11-11T14:50:14Z
--- tags: - generated_from_keras_callback model-index: - name: amitjohn007/electra-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # amitjohn007/electra-finetuned-squad This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2298 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16599, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.5733 | 0 | | 0.3829 | 1 | | 0.2298 | 2 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
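The card records only Keras training details; a minimal question-answering sketch follows (not from the original card; since the repo ships TensorFlow weights, the pipeline is asked for the TF framework explicitly, and the question/context pair is a toy example):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a question-answering pipeline using the TF weights.
qa = pipeline("question-answering", model="amitjohn007/electra-finetuned-squad", framework="tf")

result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on SQuAD-style question answering data.",
)
print(result["answer"], result["score"])
```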
huggingtweets/fede_boss
huggingtweets
2022-11-11T16:13:06Z
108
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-11T15:24:35Z
--- language: en thumbnail: http://www.huggingtweets.com/fede_boss/1668183181917/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1588112047065239554/SAkHUQW-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Federico Lucca 🎩​</div> <div style="text-align: center; font-size: 14px;">@fede_boss</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Federico Lucca 🎩​. | Data | Federico Lucca 🎩​ | | --- | --- | | Tweets downloaded | 1444 | | Retweets | 1086 | | Short tweets | 60 | | Tweets kept | 298 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tnwg748/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fede_boss's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ix9k2wo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ix9k2wo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/fede_boss') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
DigitalUmuganda/sentiment_analysis_kinyarwanda
DigitalUmuganda
2022-11-11T15:35:17Z
4
0
BERT
[ "BERT", "pytorch", "bert", "KinyaBert", "sentiment analysis", "rw", "region:us" ]
null
2022-11-11T12:22:50Z
--- library_name: BERT task: sentiment-analysis tags: - KinyaBert - sentiment analysis language: rw datasets: - https://github.com/afrisenti-semeval/afrisent-semeval-2023/tree/main/SubtaskA --- # Kinyarwanda twitter sentiment analysis A sentiment analysis model for Kinyarwanda tweets, fine-tuned from a BERT model pretrained on a Kinyarwanda dataset # Dataset information The model was trained on 3000 annotated tweets; here is the [link to the dataset](https://github.com/afrisenti-semeval/afrisent-semeval-2023/blob/main/SubtaskA/train/kr_train.tsv) # Model limitations The model was trained on a small dataset of tweets, which can limit its accuracy on other datasets # Result Accuracy: 85%
tglcourse/ddpm-butterflies-64px
tglcourse
2022-11-11T15:27:34Z
16
0
diffusers
[ "diffusers", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-11T15:19:30Z
Diffusion model trained on https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset Example outputs: ![examples](https://huggingface.co/tglcourse/ddpm-butterflies-64px/resolve/main/example_outputs.png) Usage:
```python
import torch
from diffusers import DDPMPipeline

# Pick a device and load the pipeline.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = DDPMPipeline.from_pretrained('tglcourse/ddpm-ema-butterflies-64').to(device)

# Sample a batch of images from the diffusion model.
images = pipeline(batch_size=8).images
```
More info in course soon.
PrimeQA/joint-tydi-nq-list-task-xlm-roberta-large
PrimeQA
2022-11-11T14:54:56Z
32
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "MRC", "TyDiQA", "Natural Questions List", "xlm-roberta-large", "multilingual", "arxiv:1911.02116", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-11-11T14:43:26Z
--- license: apache-2.0 tags: - MRC - TyDiQA - Natural Questions List - xlm-roberta-large language: - multilingual --- *Task*: MRC # Model description An XLM-RoBERTa reading comprehension model with joint training for TyDi, NQ, and List Question Answering (NQ list data) using a fine-tuned [SQuAD xlm-roberta-large](https://huggingface.co/PrimeQA/squad-v1-xlm-roberta-large) model ## Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, joint-tydi-nq-list-task-xlm-roberta-large. ## Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [listqa.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/listqa.ipynb). ### BibTeX entry and citation info ```bibtex @article{kwiatkowski-etal-2019-natural, title = "Natural Questions: A Benchmark for Question Answering Research", author = "Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew M. and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav", journal = "Transactions of the Association for Computational Linguistics", volume = "7", year = "2019", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q19-1026", doi = "10.1162/tacl_a_00276", pages = "452--466", } ``` ```bibtex @article{DBLP:journals/corr/abs-1911-02116, author = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov}, title = {Unsupervised Cross-lingual Representation Learning at Scale}, journal = {CoRR}, volume = {abs/1911.02116}, year = {2019}, url = {http://arxiv.org/abs/1911.02116}, eprinttype = {arXiv}, eprint = {1911.02116}, timestamp = {Mon, 11 Nov 2019 18:38:09 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
Terence3927/Reinforce-CartPole-v1
Terence3927
2022-11-11T13:54:07Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-11-11T13:49:22Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 92.90 +/- 34.69 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
epec254/my-awesome-setfit-model
epec254
2022-11-11T13:48:01Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-11T13:47:55Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Marre-Barre/bubblydubbly
Marre-Barre
2022-11-11T13:23:07Z
0
7
null
[ "region:us" ]
null
2022-11-10T12:51:24Z
- Prompt: {{replace this with subject}}, soft colors, art by bubblydubbly
- Negative prompt: heavy contrast, red eyes, blue hair (use blue hair as a negative prompt if you want some normal hair)
- Scale: 8
- Steps: 50

`art by bubblydubbly` is the keyword. bubblydubbly_7k is trained at 7000 steps, but tbh, I like the designs from 11.5k steps more, but the style is better for 7k steps. Pick your own and decide for yourself. :)

Note: this is trained at 11500 steps, for 115 images, but seems a bit overtrained. Does some funky stuff with teeth. Will make a new version at 10k steps to see whether there is any difference while still retaining the style.
aayu/bert-large-uncased-finetuned-JD_CV
aayu
2022-11-11T12:04:48Z
56
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-11-11T11:37:58Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased-finetuned-JD_CV results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-JD_CV This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.3896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 8.2520 | | No log | 2.0 | 2 | 7.5931 | | No log | 3.0 | 3 | 7.3896 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
bigmorning/whisper_havest_0035
bigmorning
2022-11-11T11:30:44Z
60
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-11T11:30:35Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whisper_havest_0035 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_havest_0035 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.5897 - Train Accuracy: 0.0150 - Train Do Wer: 1.0 - Validation Loss: 4.5822 - Validation Accuracy: 0.0130 - Validation Do Wer: 1.0 - Epoch: 34 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 | | 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 | | 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 | | 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 | | 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 | | 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 | | 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 | | 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 | | 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 | | 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 | | 4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 | | 4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 | | 4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 | | 4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 | | 4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 | | 4.4855 | 0.0123 | 1.0 | 4.7260 | 0.0121 | 1.0 | 15 | | 4.4305 | 0.0124 | 1.0 | 4.7018 | 0.0123 | 1.0 | 16 | | 4.3788 | 0.0125 | 1.0 | 4.6738 | 0.0123 | 1.0 | 17 | | 4.3305 | 0.0127 | 1.0 | 4.6525 | 0.0124 | 1.0 | 18 | | 4.2860 | 0.0128 | 1.0 | 4.6401 | 0.0125 | 1.0 | 19 | | 4.2451 | 0.0130 | 1.0 | 4.6234 | 0.0126 | 1.0 | 20 | | 4.1994 | 0.0132 | 1.0 | 4.6077 | 0.0128 | 1.0 | 21 | | 4.1521 | 0.0133 | 1.0 | 4.6098 | 0.0129 | 1.0 | 22 | | 4.1148 | 0.0134 | 1.0 | 4.5919 | 0.0129 | 1.0 | 23 | | 4.0701 | 0.0135 | 1.0 | 4.6038 | 0.0128 | 1.0 | 24 | | 4.0199 | 0.0137 | 1.0 | 4.5777 | 0.0130 | 1.0 | 25 | | 3.9631 | 0.0138 | 1.0 | 4.5734 | 0.0131 | 1.0 | 26 | | 3.9175 | 0.0140 | 1.0 | 4.5866 | 0.0129 | 1.0 | 27 | | 3.8690 | 0.0142 | 1.0 | 4.5900 | 0.0129 | 1.0 | 28 | | 3.8276 | 0.0143 | 1.0 | 4.5602 | 0.0131 | 1.0 | 29 | | 3.7499 | 0.0145 | 1.0 | 4.5619 | 0.0132 | 1.0 | 30 | | 3.6968 | 0.0147 | 1.0 | 4.6203 | 0.0133 | 1.0 | 31 | | 3.6714 | 0.0149 | 1.0 | 4.7075 | 0.0133 | 1.0 | 32 | | 3.6318 | 0.0149 | 1.0 | 4.6638 | 0.0125 | 1.0 | 33 | | 3.5897 | 0.0150 | 1.0 | 4.5822 | 0.0130 | 1.0 | 34 | ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
bigmorning/whisper_havest_0015
bigmorning
2022-11-11T11:26:27Z
62
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-09T18:52:58Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whisper_havest_0015 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_havest_0015 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.5508 - Train Accuracy: 0.0121 - Train Do Wer: 1.0 - Validation Loss: 4.7620 - Validation Accuracy: 0.0121 - Validation Do Wer: 1.0 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 | | 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 | | 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 | | 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 | | 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 | | 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 | | 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 | | 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 | | 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 | | 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 | | 4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 | | 4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 | | 4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 | | 4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 | | 4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 | ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
bigmorning/whisper_havest_0005
bigmorning
2022-11-11T11:24:14Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-09T18:09:11Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whisper_havest_0005 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_havest_0005 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.4115 - Train Accuracy: 0.0115 - Train Do Wer: 1.0 - Validation Loss: 6.2357 - Validation Accuracy: 0.0115 - Validation Do Wer: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 | | 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 | | 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 | | 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 | | 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 | ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
ctu-aic/m2m100-418M-multilingual-summarization-multilarge-cs
ctu-aic
2022-11-11T11:17:40Z
110
1
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "Summarization", "abstractive summarization", "multilingual summarization", "m2m100_418M", "Czech", "text2text generation", "text generation", "cs", "en", "de", "fr", "tu", "zh", "es", "ru", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-23T21:38:12Z
--- language: - cs - en - de - fr - tu - zh - es - ru tags: - Summarization - abstractive summarization - multilingual summarization - m2m100_418M - Czech - text2text generation - text generation license: cc-by-sa-4.0 datasets: - Multilingual_large_dataset_(multilarge) - cnc/dm - xsum - mlsum - cnewsum - cnc - sumeczech metrics: - rouge - rougeraw - MemesCS --- # m2m100-418M-multilingual-summarization-multilarge-cs This model is a fine-tuned checkpoint of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the Multilingual large summarization dataset focused on Czech texts to produce multilingual summaries. ## Task The model deals with a multi-sentence summary in eight different languages. With the idea of adding other foreign language documents, and by having a considerable amount of Czech documents, we aimed to improve model summarization in the Czech language. Supported languages: ''cs', 'en', 'de', 'es', 'fr', 'ru', 'tu', 'zh' #Usage Assume that you are using the provided MultilingualSummarizer.ipynb file and included files from git repository. ```python ## Configuration of summarization pipeline # def summ_config(): cfg = OrderedDict([ ## summarization model - checkpoint # ctu-aic/m2m100-418M-multilingual-summarization-multilarge-cs # ctu-aic/mt5-base-multilingual-summarization-multilarge-cs # ctu-aic/mbart25-multilingual-summarization-multilarge-cs ("model_name", "ctu-aic/mbart25-multilingual-summarization-multilarge-cs"), ## language of summarization task # language : string : cs, en, de, fr, es, tr, ru, zh ("language", "en"), ## generation method parameters in dictionary # ("inference_cfg", OrderedDict([ ("num_beams", 4), ("top_k", 40), ("top_p", 0.92), ("do_sample", True), ("temperature", 0.95), ("repetition_penalty", 1.23), ("no_repeat_ngram_size", None), ("early_stopping", True), ("max_length", 128), ("min_length", 10), ])), #texts to summarize values = (list of strings, string, dataset) ("texts", [ "english text1 to summarize", "english text2 to summarize", ] ), #OPTIONAL: Target summaries values = (list of strings, string, None) ('golds', [ "target english text1", "target english text2", ]), #('golds', None), ]) return cfg cfg = summ_config() mSummarize = MultiSummarizer(**cfg) summaries,scores = mSummarize(**cfg) ``` ## Dataset Multilingual large summarization dataset consists of 10 sub-datasets mainly based on news and daily mails. For the training, it was used the entire training set and 72% of the validation set. 
``` Train set: 3 464 563 docs Validation set: 121 260 docs ``` | Stats | fragment | | | avg document length | | avg summary length | | Documents | |-------------|----------|---------------------|--------------------|--------|---------|--------|--------|--------| | __dataset__ |__compression__ | __density__ | __coverage__ | __nsent__ | __nwords__ | __nsent__ | __nwords__ | __count__ | | cnc | 7.388 | 0.303 | 0.088 | 16.121 | 316.912 | 3.272 | 46.805 | 750K | | sumeczech | 11.769 | 0.471 | 0.115 | 27.857 | 415.711 | 2.765 | 38.644 | 1M | | cnndm | 13.688 | 2.983 | 0.538 | 32.783 | 676.026 | 4.134 | 54.036 | 300K | | xsum | 18.378 | 0.479 | 0.194 | 18.607 | 369.134 | 1.000 | 21.127 | 225K| | mlsum/tu | 8.666 | 5.418 | 0.461 | 14.271 | 214.496 | 1.793 | 25.675 | 274K | | mlsum/de | 24.741 | 8.235 | 0.469 | 32.544 | 539.653 | 1.951 | 23.077 | 243K| | mlsum/fr | 24.388 | 2.688 | 0.424 | 24.533 | 612.080 | 1.320 | 26.93 | 425K | | mlsum/es | 36.185 | 3.705 | 0.510 | 31.914 | 746.927 | 1.142 | 21.671 | 291K | | mlsum/ru | 78.909 | 1.194 | 0.246 | 62.141 | 948.079 | 1.012 | 11.976 | 27K| | cnewsum | 20.183 | 0.000 | 0.000 | 16.834 | 438.271 | 1.109 | 21.926 | 304K | #### Tokenization Truncation and padding were set to 512 tokens for the encoder (input text) and 128 for the decoder (summary). ## Training Trained based on cross-entropy loss. ``` Time: 3 days 10 hours Epochs: 1072K steps = 10 (from 10) GPUs: 4x NVIDIA A100-SXM4-40GB eloss: 2.824 - 1.745 tloss: 4.559 - 1.615 ``` ### ROUGE results per individual dataset test set: | ROUGE | ROUGE-1 | | | ROUGE-2 | | | ROUGE-L | | | |------------|---------|---------|-----------|--------|--------|-----------|--------|--------|---------| | dataset | Precision | Recall | Fscore | Precision | Recall | Fscore | Precision | Recall | Fscore | | cnc | 30.13 | 22.56 | 25.21 | 10.53 | 8.01 | 8.9 | 22.47 | 16.92 | 18.86 | | sumeczech- | 26.6 | 19.66 | 22.01 | 8.17 | 6.12 | 6.82 | 19.93 | 14.81 | 16.54 | | cnndm | 41.8 | 38.41 | 38.94 | 18.74 | 17.14 | 17.4 | 29.69 | 27.33 | 27.68 | | xsum | 38.27 | 33.62 | 35.16 | 14.39 | 12.69 | 13.25 | 30.77 | 27.05 | 28.29 | | mlsum-tu | 52.44 | 44.36 | 46.39 | 36.98 | 31.51 | 32.86 | 46.04 | 39.04 | 40.8 | | mlsum-de | 42.19 | 40.5 | 40.7 | 28.8 | 28.51 | 28.37 | 38.95 | 37.7 | 37.79 | | mlsum-fr | 34.57 | 27.74 | 29.95 | 16.27 | 13.04 | 14.08 | 27.18 | 21.89 | 23.6 | | mlsum-es | 30.93 | 26.41 | 27.66 | 11.42 | 9.85 | 10.28 | 25.12 | 21.59 | 22.55 | | mlsum-ru | 0.65 | 0.52 | 0.56 | 0.15 | 0.15 | 0.15 | 0.65 | 0.52 | 0.56 | | cnewsum | 25.14 | 26.56 | 24.45 | 6.89 | 7.54 | 6.78 | 24.77 | 26.15 | 24.08 | # USAGE ``` soon ```
pe65374/PromptCLUE-base
pe65374
2022-11-11T11:13:21Z
107
4
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:creativeml-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-11T11:06:36Z
--- license: creativeml-openrail-m --- This model is forked from ClueAI's PromptCLUE for easy deployment only. Please visit ClueAI's space on huggingface.co. Thank you. <a href="https://colab.research.google.com/drive/1noyBA_JrYO6Lk6cwxsNZ_jdJ-Jtaf82G?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
PromptCLUE: a zero-shot learning model for Chinese NLP tasks. The model was pre-trained on a 100-billion-token Chinese corpus (about 1.5 trillion Chinese tokens seen in total) and then prompt-tuned on hundreds of tasks. For understanding tasks such as classification, sentiment analysis, and extraction, you can define your own label set; for many generation tasks, sampling-based free-form generation is supported.
<a href='https://www.cluebenchmarks.com/clueai.html'>Online demo</a> &nbsp; | &nbsp; <a href='https://github.com/clue-ai/PromptCLUE'>GitHub repository</a>&nbsp; | &nbsp;<a href='https://colab.research.google.com/drive/1noyBA_JrYO6Lk6cwxsNZ_jdJ-Jtaf82G?usp=sharing#scrollTo=Nk2tSi3vnSN0'>Try it on Colab</a>
Load the model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ClueAI/PromptCLUE-base")
model = AutoModelForSeq2SeqLM.from_pretrained("ClueAI/PromptCLUE-base")
```
Run inference with the model:
```python
import torch
#device = torch.device('cpu')
device = torch.device('cuda')
model.to(device)

def preprocess(text):
    return text.replace("\n", "_")

def postprocess(text):
    return text.replace("_", "\n")

def answer(text, sample=False, top_p=0.6):
    '''sample: whether to sample; set to True for generation tasks. top_p: between 0 and 1; higher values give more diverse output.'''
    text = preprocess(text)
    encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors="pt").to(device)
    if not sample:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=128, num_beams=4, length_penalty=0.6)
    else:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=64, do_sample=True, top_p=top_p)
    out_text = tokenizer.batch_decode(out["sequences"], skip_special_tokens=True)
    return postprocess(out_text[0])
```
### Example inputs
#### News classification (classify)
```bash
Input: 分类任务: 折价率过低遭抛售基金泰和跌7.15%,证券时报记者 朱景锋本报讯 由于折价率在大盘封基中处于最低水平,基金泰和昨日遭到投资者大举抛售,跌幅达到7.15%,远超大盘。盘面显示,基金泰和随大盘高开,之后开始震荡走低,午后开始加速下行,几乎没有像样反弹。截至收盘时,在沪深300指数仅下跌2.56%的情况下,基金泰和收盘跌幅高达7.15%,在所有封基中跌幅最大,而昨日多数封基跌幅在2%左右。 选项:财经,娱乐,时政,股票 答案:
Model output: 财经
```
#### Intent classification (classify)
```bash
Input: 意图分类: 帮我定一个周日上海浦东的房间 选项:闹钟,文学,酒店,艺术,体育,健康,天气,其他 答案:
Model output: 酒店
```
#### Sentiment analysis (classify)
```bash
Input: 情感分析: 这个看上去还可以,但其实我不喜欢 选项:积极,消极 答案:
Model output: 消极
```
#### Natural language inference (generate)
```bash
Input: 请推理出上下文的关系: 前提:对不起事情就是这样。 假设:事情就是这样,不需要道歉。 选项:中立,蕴涵,矛盾 答案:
Model output: 矛盾
```
#### Reading comprehension (generate)
```bash
Input: 阅读文章,给出答案: 段落: 港汇指数,全称港元实际汇兑指数(Effective Exchange Rate Index for the Hong Kong Dollar)是由香港政府统计处编制的一项指数,以反映港元与香港主要贸易伙伴之货币的名义有效汇率加权平均数的变动情况。加权比重是按1999年至2000年平均贸易模式所制定,但政府并未有公布详细的计算公式。旧港汇指数基准日为2000年1月1日,基数为100点。由2012年1月3日起,新系列港汇指数 (包括15种货币及以2010年1月 = 100) 已取代旧港汇指数系列。港汇指数的作用,主要是用于反映香港的货品及服务的价格相对于其主要贸易伙伴的变动,并通常被视作反映香港价格竞争力的指标。 问题:港汇指数的加权比重如何制定? 答案:
Model output: 按1999年至2000年平均贸易模式所制定
```
#### Reading comprehension, free-form (generate)
```bash
Input: 阅读以下对话并回答问题。 男:今天怎么这么晚才来上班啊?女:昨天工作到很晚,而且我还感冒了。男:那你回去休息吧,我帮你请假。女:谢谢你。 问题:女的怎么样? 选项:正在工作,感冒了,在打电话,要出差。 答案:
Model output: 感冒了
```
#### Summarization (generate)
```bash
Input: 为下面的文章生成摘要: 北京时间9月5日12时52分,四川甘孜藏族自治州泸定县发生6.8级地震。地震发生后,领导高度重视并作出重要指示,要求把抢救生命作为首要任务,全力救援受灾群众,最大限度减少人员伤亡 答案:
Model output: 四川甘孜发生6.8级地震
```
#### Translation, Chinese to English (generate)
```bash
Input: 翻译成英文: 议长去了台湾,中国人民很愤怒。 答案:
Model output: The secretary went to Taiwan and the Chinese people were angry.
```
#### Translation, English to Chinese (generate)
```bash
Input: 翻译成中文: This is a dialogue robot that can talk to people. 答案:
Model output: 这是一台可以与人交谈的对话机器人。
```
#### General information extraction (generate)
```bash
Input: 信息抽取: 据新华社电广东省清远市清城区政府昨日对外发布信息称,日前被实名举报涉嫌勒索企业、说“分分钟可以搞垮一间厂”的清城区环保局局长陈柏,已被免去清城区区委委员 问题:机构名,人名,职位 答案:
Model output: 机构名:新华社,清城区政府,清城区环保局,清城区区委 人名:陈柏 职位:局长,区委委员
```
#### Résumé information extraction (generate)
```bash
Input: 阅读文本抽取关键信息: 张玄武2000年出生中国国籍无境外居留权博士学历现任杭州线锁科技技术总监。 问题:机构,人名,职位,籍贯,专业,国籍,学历,种族 答案:
Model output: 人名:张玄武 职位:杭州线锁科技技术总监 国籍:中国国籍 学历:博士学历
```
#### Medical information extraction (generate)
```bash
Input: 从文本中抽取信息: 患者精神可,饮食可,睡眠可,二便正常。患者通过综合治疗头晕症状较前减轻,患者继续口服改善脑血管及调整血压变化药物。 问题:症状,治疗,检查,身体部位,疾病 答案:
Model output: 症状:头晕 治疗:改善脑血管及调整血压变化药物 身体部位:二便
```
#### E-commerce customer request classification (classify)
```bash
Input: 电商客户诉求分类: 收到但不太合身,可以退换吗 选项:买家咨询商品是否支持花呗付款,买家表示收藏关注店铺,买家咨询退换货规则,买家需要商品推荐 答案:
Model output: 买家咨询退换货规则
```
#### Medical semantic similarity (classify)
```bash
Input: 下面句子是否表示了相同的语义: 文本1:糖尿病腿麻木怎么办? 文本2:糖尿病怎样控制生活方式 选项:相似,不相似 答案:
Model output: 不相似
```
#### Question generation (generate)
```bash
Input: 问题生成: 中新网2022年9月22日电 22日,商务部召开例行新闻发布会,商务部新闻发言人束珏婷表示,今年1-8月,中国实际使用外资1384亿美元,增长20.2%;其中,欧盟对华投资增长123.7%(含通过自由港投资数据)。这充分表明,包括欧盟在内的外国投资者持续看好中国市场,希望继续深化对华投资合作。 答案:
Model output: 1、商务部发言人束珏婷表示,中国去年1-8月实际使用外资1384亿美元,增长20.2%,其中欧盟对华投资增长多少? 2、欧盟对华投资的增长是否达到预期? 3、欧盟在对华投资合作方面取得什么样的成果?
```
#### Coreference resolution (generate)
```bash
Input: 指代消解: 段落: 少平跟润叶进了她二爸家的院子,润生走过来对他(代词)说:“我到宿舍找了你两回,你到哪里去了?” 问题:代词“他”指代的是? 答案:
Model output: 少平
```
#### Keyword extraction (generate)
```bash
Input: 抽取关键词: 当地时间21日,美国联邦储备委员会宣布加息75个基点,将联邦基金利率目标区间上调到3.00%至3.25%之间,符合市场预期。这是美联储今年以来第五次加息,也是连续第三次加息,创自1981年以来的最大密集加息幅度。 关键词:
Model output: 美联储,利率目标区间,加息,基点
```
#### Sentiment tendency (classify)
```bash
文字中包含了怎样的情感: 超可爱的帅哥,爱了。。。 选项:厌恶,喜欢,开心,悲伤,惊讶,生气,害怕 答案:
Model output: 喜欢
```
For more example tasks and models, see [ClueAI](https://github.com/clue-ai/PromptCLUE)
BearlyWorkingYT/OPT-125M-Lovecraft
BearlyWorkingYT
2022-11-11T10:53:19Z
129
0
transformers
[ "transformers", "pytorch", "tf", "opt", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-09T10:52:28Z
--- license: other --- This is the OPT model trained in this [video](https://www.youtube.com/watch?v=_GLixmhmdZc) It has been trained on the complete works of H. P. Lovecraft. It is both highly overfit and extremely racist. It can and will use racial slurs given the chance. To use it, you will have to download it. Check out my [YouTube Channel](https://www.youtube.com/channel/UCLXxfueCPZRZnyGFWJ07uqA) **Edit: You probably shouldn't use this model.**
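A minimal way to try the checkpoint locally is sketched below; it assumes the standard Transformers `text-generation` pipeline works for this OPT checkpoint, the prompt is only an example, and the caveats above about offensive output apply to anything it generates.
```python
from transformers import pipeline

# Load the fine-tuned OPT-125M checkpoint from the Hub.
generator = pipeline("text-generation", model="BearlyWorkingYT/OPT-125M-Lovecraft")

# Sample a short continuation in the (overfit) Lovecraft style.
result = generator("The ancient city beneath the waves", max_new_tokens=60, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```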
amit3love/distilbert-base-uncased-finetuned-emotion
amit3love
2022-11-11T10:14:54Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-05T09:00:54Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2209 - Accuracy: 0.9225 - F1: 0.9226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8477 | 1.0 | 250 | 0.3204 | 0.9025 | 0.9000 | | 0.2559 | 2.0 | 500 | 0.2209 | 0.9225 | 0.9226 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
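The hyperparameter list above maps roughly onto a `TrainingArguments` configuration; the sketch below is an approximate reconstruction for reference (dataset loading, the model, and the `Trainer` wiring are not taken from the card, and the evaluation strategy is assumed).
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,                  # Adam with betas=(0.9, 0.999), epsilon=1e-8 (the defaults)
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",         # assumption: the metrics above are reported once per epoch
)
```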
cjvt/sloberta-si-nli
cjvt
2022-11-11T10:11:45Z
107
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "sl", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-11T09:19:06Z
--- language: - sl license: cc-by-sa-4.0 --- # sloberta-si-nli SloBERTa model finetuned on the SI-NLI dataset for Slovene natural language inference. Fine-tuned in a classic sequence pair classification setting on the official training/validation/test split for 10 epochs, using validation set accuracy for model selection. Optimized using the AdamW optimizer (learning rate 2e-5) and cross-entropy loss. Using batch size `82` (selected based on the available GPU memory) and maximum sequence length `102` (99th percentile of the lengths in the training set). Achieves the following metrics: - best validation accuracy: `0.720` - test accuracy = `0.735`
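A minimal inference sketch (the Slovene sentence pair is illustrative only; the label names are read from the model config rather than assumed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cjvt/sloberta-si-nli")
model = AutoModelForSequenceClassification.from_pretrained("cjvt/sloberta-si-nli")

premise = "Danes je zunaj sončno."       # illustrative example pair
hypothesis = "Zunaj sije sonce."

# Encode the premise/hypothesis pair as in sequence-pair classification,
# using the same maximum length as during fine-tuning.
inputs = tokenizer(premise, hypothesis, truncation=True, max_length=102, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[int(logits.argmax(dim=-1))])
```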
thisisHJLee/wav2vec2-large-xls-r-1b-korean-convsen5
thisisHJLee
2022-11-11T09:29:43Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-11T02:32:15Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-1b-korean-convsen5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-korean-convsen5 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0655 - Cer: 0.0105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | |:-------------:|:-----:|:----:|:------:|:---------------:| | 0.2312 | 1.0 | 1408 | 0.0869 | 0.4450 | | 0.109 | 2.0 | 2816 | 0.0789 | 0.4756 | | 0.0457 | 3.0 | 4224 | 0.0696 | 0.5013 | | 0.0334 | 4.0 | 5632 | 0.0628 | 0.4815 | | 0.0222 | 5.0 | 7040 | 0.0655 | 0.0105 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.11.0
sd-concepts-library/obama-based-on-xi
sd-concepts-library
2022-11-11T08:45:43Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-30T14:45:22Z
--- license: mit --- ### obama_based_on_xi on Stable Diffusion This is the `<obama>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<obama> 0](https://huggingface.co/sd-concepts-library/obama-based-on-xi/resolve/main/concept_images/1.jpg) ![<obama> 1](https://huggingface.co/sd-concepts-library/obama-based-on-xi/resolve/main/concept_images/0.jpg) ![<obama> 2](https://huggingface.co/sd-concepts-library/obama-based-on-xi/resolve/main/concept_images/2.jpg) ![<obama> 3](https://huggingface.co/sd-concepts-library/obama-based-on-xi/resolve/main/concept_images/3.jpg)
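If you prefer plain `diffusers` over the Conceptualizer notebook, a loading sketch is below. The base Stable Diffusion checkpoint and the `learned_embeds.bin` filename are assumptions (the card does not state them); the pattern follows the usual sd-concepts-library layout.
```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Assumed filename: sd-concepts-library repos usually ship the embedding as learned_embeds.bin.
embeds_path = hf_hub_download("sd-concepts-library/obama-based-on-xi", "learned_embeds.bin")
learned_embeds = torch.load(embeds_path, map_location="cpu")
token, embedding = next(iter(learned_embeds.items()))   # e.g. "<obama>"

# Assumed base checkpoint; use whichever Stable Diffusion version the concept was trained against.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Register the new token and copy its learned embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a watercolor painting of {token}").images[0]
image.save("obama_concept.png")
```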
reza-aditya/q-Taxi-v3
reza-aditya
2022-11-11T08:32:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-11T08:32:06Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="reza-aditya/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
popaqy/pegasus-base-qag-bg-finetuned-spelling6-bg
popaqy
2022-11-11T08:28:28Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-09T07:11:03Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: pegasus-base-qag-bg-finetuned-spelling6-bg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-base-qag-bg-finetuned-spelling6-bg This model is a fine-tuned version of [rmihaylov/pegasus-base-qag-bg](https://huggingface.co/rmihaylov/pegasus-base-qag-bg) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5122 - Rouge1: 84.87 - Rouge2: 76.3663 - Rougel: 84.835 - Rougelsum: 84.8254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.6248 | 1.0 | 1563 | 0.5306 | 84.4974 | 75.7212 | 84.4591 | 84.45 | | 0.4855 | 2.0 | 3126 | 0.5122 | 84.87 | 76.3663 | 84.835 | 84.8254 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
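A minimal inference sketch for the spelling-correction use case (the misspelled Bulgarian sentence is purely illustrative, and the generation settings are not taken from the card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("popaqy/pegasus-base-qag-bg-finetuned-spelling6-bg")
model = AutoModelForSeq2SeqLM.from_pretrained("popaqy/pegasus-base-qag-bg-finetuned-spelling6-bg")

# Illustrative misspelled input; the model should emit the corrected text.
text = "Тя отиде на училищте вчера."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```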
pravesh/bert-sentiment-analysis-nncf
pravesh
2022-11-11T07:38:31Z
31
0
transformers
[ "transformers", "pytorch", "distilbert", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-11-10T11:11:17Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sst2_int8_xml results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9036697247706422 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sst2_int8_xml This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4463 - Accuracy: 0.9037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.9.1+cu111 - Datasets 2.6.1 - Tokenizers 0.13.2
amitjohn007/roberta-base-finetuned-squad
amitjohn007
2022-11-11T07:26:14Z
59
0
transformers
[ "transformers", "tf", "roberta", "question-answering", "generated_from_keras_callback", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-11T06:35:49Z
--- license: cc-by-4.0 tags: - generated_from_keras_callback model-index: - name: amitjohn007/roberta-base-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # amitjohn007/roberta-base-finetuned-squad This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4173 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16608, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.7396 | 0 | | 0.5461 | 1 | | 0.4173 | 2 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-evn3-ntsema-colab
ntsema
2022-11-11T06:14:11Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-10T20:11:49Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: wav2vec2-xlsr-53-espeak-cv-ft-evn3-ntsema-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 0.97 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-espeak-cv-ft-evn3-ntsema-colab This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5004 - Wer: 0.97 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8078 | 7.14 | 400 | 1.3558 | 0.9933 | | 0.7854 | 14.28 | 800 | 1.2786 | 0.98 | | 0.3685 | 21.43 | 1200 | 1.4606 | 0.9733 | | 0.1912 | 28.57 | 1600 | 1.5004 | 0.97 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
NineArtsDragon/bert-finetuned-ner2
NineArtsDragon
2022-11-11T05:03:39Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-11T04:08:10Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0015 - Precision: 0.9709 - Recall: 0.9904 - F1: 0.9806 - Accuracy: 0.9995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 108 | 0.0035 | 0.9565 | 0.9876 | 0.9718 | 0.9990 | | No log | 2.0 | 216 | 0.0019 | 0.9628 | 0.9904 | 0.9764 | 0.9994 | | No log | 3.0 | 324 | 0.0016 | 0.9764 | 0.9904 | 0.9834 | 0.9995 | | No log | 4.0 | 432 | 0.0017 | 0.9736 | 0.9866 | 0.9801 | 0.9995 | | 0.0166 | 5.0 | 540 | 0.0015 | 0.9709 | 0.9904 | 0.9806 | 0.9995 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
zhenyueyu/distilbert-base-uncased-finetuned-squad
zhenyueyu
2022-11-11T04:12:56Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-11T03:44:37Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7093 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2755 | 1.0 | 553 | 2.1210 | | 1.8766 | 2.0 | 1106 | 1.7363 | | 1.5381 | 3.0 | 1659 | 1.7093 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
f2io/ner-roles-openapi
f2io
2022-11-11T03:48:56Z
112
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-15T17:26:01Z
--- language: en widget: - text: "As manager, I want to Find pet by ID through /pet/{petId}. Returns a single pet." - text: "As operator, I want to Create user through /user. #/definitions/User This can only be done by the logged in user." license: mit --- # ner-roles-openapi: a model fine-tuned from distilbert-base-uncased for the NER task. ## Introduction [ner-roles-openapi] is a NER model that was fine-tuned from distilbert-base-uncased on a synthetic dataset based on SCRUM user stories. The model was trained on a synthetic dataset (~80 sentences). It was validated on OpenAPI/Swagger data and outperformed other models on this type of data specifically. ## Training data Training data was classified as follows: Abbreviation|Description -|- O|Outside of a named entity ENTITY|Entity provided by OpenAPI ACTION|Action required by a user/role ROLE|Role predefined to use APIs
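A quick way to try the model is the token-classification pipeline; the snippet below reuses the first widget example from the card and merges word pieces into whole entity spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="f2io/ner-roles-openapi",
    aggregation_strategy="simple",   # merge sub-word tokens into whole ENTITY/ACTION/ROLE spans
)

text = "As manager, I want to Find pet by ID through /pet/{petId}. Returns a single pet."
for entity in ner(text):
    print(f'{entity["entity_group"]:>8} -> {entity["word"]} ({entity["score"]:.2f})')
```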
kakaobrain/coyo-align-b7-base
kakaobrain
2022-11-11T03:42:56Z
0
0
null
[ "align", "clip", "en", "dataset:kakaobrain/coyo-700m", "arxiv:2102.05918", "license:apache-2.0", "region:us" ]
null
2022-11-09T07:13:12Z
--- language: - en tags: - align - clip license: apache-2.0 datasets: - kakaobrain/coyo-700m inference: false --- # Model Details This is an unofficial implementation of [ALIGN](https://arxiv.org/abs/2102.05918) trained on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). The official ALIGN is trained on its own dataset of 1.8B samples, which has not been released to the public. Instead, we trained our implementation of the ALIGN model on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). It was developed by Kakao Brain to validate the performance of the COYO-700M dataset on a large-scale model. The training took about 8 days on TPU V3-512. ## Model Date April 2022 ## Model Type This is a dual encoder model where: - the image encoder uses the EfficientNet-B7 architecture - the text encoder uses the BERT-base architecture # Training data This model is trained on the [COYO-700M](https://github.com/kakaobrain/coyo-dataset) dataset. # Evaluation results | | Dataset | ImageNet | Flickr30k | | MsCOCO | | |----------------------------------|:----------:|:--------:|:---------:|:-------:|:-------:|:-------:| | | | KNN | I2T R@1 | T2I R@1 | I2T R@1 | T2I R@1 | | ALIGN-L2-Large(Google) | ALIGN 1.8B | 76.4 | 88.6 | 75.7 | 58.6 | 45.6 | | ALIGN-B7-Base(Google) | ALIGN 1.8B | 69.3 | - | - | 55.4 | 41.7 | | COYO-ALIGN-B7-Base(Kakao Brain) | COYO-700M | 68.6 | 88.1 | 73.2 | 61.2 | 43.1 |
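The card does not point to a ready-made inference class, so the snippet below only illustrates the dual-encoder idea described above (L2-normalised image and text embeddings compared by temperature-scaled cosine similarity); the tensors, embedding size, and temperature are placeholders, not the repository's actual API.
```python
import torch
import torch.nn.functional as F

def contrastive_scores(image_embeds: torch.Tensor, text_embeds: torch.Tensor, temperature: float = 0.07):
    """Score every image against every caption with temperature-scaled cosine similarity."""
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    return image_embeds @ text_embeds.T / temperature

# Toy batch: 4 image embeddings and 4 caption embeddings in a shared 640-dim space (placeholder size).
logits = contrastive_scores(torch.randn(4, 640), torch.randn(4, 640))
print(logits.argmax(dim=-1))  # index of the best-matching caption for each image
```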
wilcomply/xlm-roberta-base-finetuned-panx-all
wilcomply
2022-11-11T03:15:09Z
121
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-11T02:43:49Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1731 - F1: 0.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2992 | 1.0 | 835 | 0.1936 | 0.8164 | | 0.1588 | 2.0 | 1670 | 0.1711 | 0.8466 | | 0.1022 | 3.0 | 2505 | 0.1731 | 0.8525 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
alextoyment/ppo-LunarLander-v2
alextoyment
2022-11-11T03:03:46Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-11T03:03:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 165.65 +/- 21.36 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it (filename assumed).
checkpoint = load_from_hub(repo_id="alextoyment/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
shafin/chemical-bert-uncased-finetuned-cust-c1-cust
shafin
2022-11-11T02:39:18Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-11T00:16:08Z
--- tags: - generated_from_trainer model-index: - name: chemical-bert-uncased-finetuned-cust-c1-cust results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chemical-bert-uncased-finetuned-cust-c1-cust This model is a fine-tuned version of [shafin/chemical-bert-uncased-finetuned-cust](https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.96 | 1.0 | 63 | 1.6719 | | 1.7095 | 2.0 | 126 | 1.5305 | | 1.5634 | 3.0 | 189 | 1.2972 | | 1.4785 | 4.0 | 252 | 1.3354 | | 1.3991 | 5.0 | 315 | 1.2542 | | 1.3482 | 6.0 | 378 | 1.1870 | | 1.2984 | 7.0 | 441 | 1.1844 | | 1.2589 | 8.0 | 504 | 1.1262 | | 1.1762 | 9.0 | 567 | 1.1176 | | 1.1724 | 10.0 | 630 | 1.0312 | | 1.1222 | 11.0 | 693 | 1.0113 | | 1.1021 | 12.0 | 756 | 1.0518 | | 1.0646 | 13.0 | 819 | 1.0433 | | 1.0273 | 14.0 | 882 | 0.9634 | | 1.0187 | 15.0 | 945 | 0.9299 | | 0.9854 | 16.0 | 1008 | 0.9458 | | 0.9799 | 17.0 | 1071 | 0.9733 | | 0.95 | 18.0 | 1134 | 0.9169 | | 0.934 | 19.0 | 1197 | 0.9246 | | 0.907 | 20.0 | 1260 | 0.8939 | | 0.8974 | 21.0 | 1323 | 0.8575 | | 0.8749 | 22.0 | 1386 | 0.8513 | | 0.8526 | 23.0 | 1449 | 0.8089 | | 0.8359 | 24.0 | 1512 | 0.8600 | | 0.8292 | 25.0 | 1575 | 0.8517 | | 0.8263 | 26.0 | 1638 | 0.8293 | | 0.8033 | 27.0 | 1701 | 0.7747 | | 0.7999 | 28.0 | 1764 | 0.8169 | | 0.7778 | 29.0 | 1827 | 0.7981 | | 0.7574 | 30.0 | 1890 | 0.7457 | | 0.7581 | 31.0 | 1953 | 0.7504 | | 0.7404 | 32.0 | 2016 | 0.7637 | | 0.7332 | 33.0 | 2079 | 0.7902 | | 0.7314 | 34.0 | 2142 | 0.7185 | | 0.7209 | 35.0 | 2205 | 0.7534 | | 0.6902 | 36.0 | 2268 | 0.7334 | | 0.6973 | 37.0 | 2331 | 0.7069 | | 0.687 | 38.0 | 2394 | 0.6820 | | 0.6658 | 39.0 | 2457 | 0.7155 | | 0.6697 | 40.0 | 2520 | 0.7149 | | 0.6584 | 41.0 | 2583 | 0.7413 | | 0.6638 | 42.0 | 2646 | 0.7245 | | 0.6282 | 43.0 | 2709 | 0.7177 | | 0.6418 | 44.0 | 2772 | 0.6653 | | 0.6323 | 45.0 | 2835 | 0.7715 | | 0.6256 | 46.0 | 2898 | 0.7269 | | 0.6109 | 47.0 | 2961 | 0.6744 | | 0.6133 | 48.0 | 3024 | 0.6816 | | 0.595 | 49.0 | 3087 | 0.6969 | | 0.6058 | 50.0 | 3150 | 0.6965 | | 0.5961 | 51.0 | 3213 | 0.6988 | | 0.587 | 52.0 | 3276 | 0.6727 | | 0.5861 | 53.0 | 3339 | 0.6327 | | 0.5758 | 54.0 | 3402 | 0.6538 | | 0.5692 | 55.0 | 3465 | 0.6612 | | 0.567 | 56.0 | 3528 | 0.5989 | | 0.5514 | 57.0 | 3591 | 0.6776 | | 0.5526 | 58.0 | 3654 | 0.6440 | | 0.556 | 59.0 | 3717 | 0.6682 | | 0.5476 | 60.0 | 3780 | 0.6254 | | 0.536 | 61.0 | 3843 | 0.6239 | | 0.526 | 62.0 | 3906 | 0.6606 | | 0.532 | 63.0 | 3969 | 0.6565 | | 0.5189 | 64.0 | 4032 | 0.6586 | | 0.5075 | 65.0 | 4095 | 0.6286 | | 0.5131 | 66.0 | 4158 | 0.6646 | | 0.498 | 67.0 | 4221 | 0.6486 | | 0.4979 | 68.0 | 4284 | 0.6313 | | 0.4885 | 69.0 | 4347 | 0.6419 | | 0.4875 | 70.0 | 4410 | 0.6313 | | 
0.4904 | 71.0 | 4473 | 0.6602 | | 0.4712 | 72.0 | 4536 | 0.6200 | | 0.4798 | 73.0 | 4599 | 0.5912 | | 0.4802 | 74.0 | 4662 | 0.6001 | | 0.4704 | 75.0 | 4725 | 0.6303 | | 0.4709 | 76.0 | 4788 | 0.5871 | | 0.465 | 77.0 | 4851 | 0.6344 | | 0.4651 | 78.0 | 4914 | 0.6030 | | 0.4501 | 79.0 | 4977 | 0.5998 | | 0.4584 | 80.0 | 5040 | 0.5926 | | 0.4651 | 81.0 | 5103 | 0.6134 | | 0.438 | 82.0 | 5166 | 0.6254 | | 0.448 | 83.0 | 5229 | 0.6260 | | 0.4295 | 84.0 | 5292 | 0.5866 | | 0.434 | 85.0 | 5355 | 0.5740 | | 0.4261 | 86.0 | 5418 | 0.5691 | | 0.4312 | 87.0 | 5481 | 0.6243 | | 0.4289 | 88.0 | 5544 | 0.5781 | | 0.4255 | 89.0 | 5607 | 0.6226 | | 0.4254 | 90.0 | 5670 | 0.5538 | | 0.4231 | 91.0 | 5733 | 0.5874 | | 0.4107 | 92.0 | 5796 | 0.6054 | | 0.4082 | 93.0 | 5859 | 0.5898 | | 0.4144 | 94.0 | 5922 | 0.5826 | | 0.4225 | 95.0 | 5985 | 0.5501 | | 0.3964 | 96.0 | 6048 | 0.5886 | | 0.3972 | 97.0 | 6111 | 0.5831 | | 0.4165 | 98.0 | 6174 | 0.5164 | | 0.4024 | 99.0 | 6237 | 0.5714 | | 0.4013 | 100.0 | 6300 | 0.5734 | | 0.3933 | 101.0 | 6363 | 0.5727 | | 0.3821 | 102.0 | 6426 | 0.5985 | | 0.3904 | 103.0 | 6489 | 0.5571 | | 0.3965 | 104.0 | 6552 | 0.5837 | | 0.3789 | 105.0 | 6615 | 0.5989 | | 0.3733 | 106.0 | 6678 | 0.5405 | | 0.3907 | 107.0 | 6741 | 0.6059 | | 0.3794 | 108.0 | 6804 | 0.5602 | | 0.3689 | 109.0 | 6867 | 0.5590 | | 0.3603 | 110.0 | 6930 | 0.5886 | | 0.3747 | 111.0 | 6993 | 0.5294 | | 0.3667 | 112.0 | 7056 | 0.5759 | | 0.3754 | 113.0 | 7119 | 0.5821 | | 0.3676 | 114.0 | 7182 | 0.5653 | | 0.3524 | 115.0 | 7245 | 0.5537 | | 0.3624 | 116.0 | 7308 | 0.5523 | | 0.3527 | 117.0 | 7371 | 0.5799 | | 0.3588 | 118.0 | 7434 | 0.6346 | | 0.3539 | 119.0 | 7497 | 0.5116 | | 0.3553 | 120.0 | 7560 | 0.5716 | | 0.3483 | 121.0 | 7623 | 0.5721 | | 0.3625 | 122.0 | 7686 | 0.5393 | | 0.3354 | 123.0 | 7749 | 0.5800 | | 0.3392 | 124.0 | 7812 | 0.5389 | | 0.344 | 125.0 | 7875 | 0.5455 | | 0.3451 | 126.0 | 7938 | 0.5428 | | 0.3374 | 127.0 | 8001 | 0.5580 | | 0.3428 | 128.0 | 8064 | 0.5339 | | 0.3386 | 129.0 | 8127 | 0.5447 | | 0.3318 | 130.0 | 8190 | 0.5738 | | 0.3388 | 131.0 | 8253 | 0.5667 | | 0.3335 | 132.0 | 8316 | 0.5407 | | 0.3383 | 133.0 | 8379 | 0.5679 | | 0.3299 | 134.0 | 8442 | 0.5846 | | 0.327 | 135.0 | 8505 | 0.5511 | | 0.3354 | 136.0 | 8568 | 0.5649 | | 0.32 | 137.0 | 8631 | 0.5358 | | 0.3265 | 138.0 | 8694 | 0.5528 | | 0.319 | 139.0 | 8757 | 0.5926 | | 0.3304 | 140.0 | 8820 | 0.5531 | | 0.3191 | 141.0 | 8883 | 0.5379 | | 0.3298 | 142.0 | 8946 | 0.5468 | | 0.3134 | 143.0 | 9009 | 0.5623 | | 0.3186 | 144.0 | 9072 | 0.5162 | | 0.3179 | 145.0 | 9135 | 0.5570 | | 0.3175 | 146.0 | 9198 | 0.5379 | | 0.3051 | 147.0 | 9261 | 0.5437 | | 0.312 | 148.0 | 9324 | 0.5301 | | 0.3093 | 149.0 | 9387 | 0.5393 | | 0.3227 | 150.0 | 9450 | 0.5531 | | 0.3125 | 151.0 | 9513 | 0.5794 | | 0.3162 | 152.0 | 9576 | 0.5677 | | 0.3006 | 153.0 | 9639 | 0.5668 | | 0.3011 | 154.0 | 9702 | 0.5797 | | 0.3208 | 155.0 | 9765 | 0.5450 | | 0.3048 | 156.0 | 9828 | 0.5465 | | 0.3092 | 157.0 | 9891 | 0.5358 | | 0.3125 | 158.0 | 9954 | 0.5043 | | 0.3083 | 159.0 | 10017 | 0.5321 | | 0.3 | 160.0 | 10080 | 0.5526 | | 0.2968 | 161.0 | 10143 | 0.5324 | | 0.3068 | 162.0 | 10206 | 0.5471 | | 0.3129 | 163.0 | 10269 | 0.5575 | | 0.3061 | 164.0 | 10332 | 0.5796 | | 0.2943 | 165.0 | 10395 | 0.5544 | | 0.2967 | 166.0 | 10458 | 0.5422 | | 0.2959 | 167.0 | 10521 | 0.5149 | | 0.2987 | 168.0 | 10584 | 0.5685 | | 0.3045 | 169.0 | 10647 | 0.5176 | | 0.2975 | 170.0 | 10710 | 0.5044 | | 0.2948 | 171.0 | 10773 | 0.5264 | | 0.3 | 172.0 | 10836 | 0.5174 | | 0.2967 | 173.0 
| 10899 | 0.5658 | | 0.2873 | 174.0 | 10962 | 0.4988 | | 0.2939 | 175.0 | 11025 | 0.5512 | | 0.2954 | 176.0 | 11088 | 0.5139 | | 0.301 | 177.0 | 11151 | 0.6007 | | 0.2948 | 178.0 | 11214 | 0.5167 | | 0.2898 | 179.0 | 11277 | 0.5443 | | 0.2869 | 180.0 | 11340 | 0.5544 | | 0.2973 | 181.0 | 11403 | 0.5644 | | 0.2985 | 182.0 | 11466 | 0.5153 | | 0.2904 | 183.0 | 11529 | 0.5561 | | 0.2872 | 184.0 | 11592 | 0.5610 | | 0.2894 | 185.0 | 11655 | 0.5511 | | 0.297 | 186.0 | 11718 | 0.5408 | | 0.2904 | 187.0 | 11781 | 0.5574 | | 0.2818 | 188.0 | 11844 | 0.5182 | | 0.2873 | 189.0 | 11907 | 0.5425 | | 0.2973 | 190.0 | 11970 | 0.5198 | | 0.2913 | 191.0 | 12033 | 0.5119 | | 0.2931 | 192.0 | 12096 | 0.5585 | | 0.2859 | 193.0 | 12159 | 0.5368 | | 0.2853 | 194.0 | 12222 | 0.5274 | | 0.294 | 195.0 | 12285 | 0.5685 | | 0.2885 | 196.0 | 12348 | 0.5581 | | 0.295 | 197.0 | 12411 | 0.4987 | | 0.2807 | 198.0 | 12474 | 0.5168 | | 0.289 | 199.0 | 12537 | 0.5284 | | 0.2893 | 200.0 | 12600 | 0.5420 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
huggingtweets/queenofbithynia
huggingtweets
2022-11-11T00:37:45Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/queenofbithynia/1668126937466/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1010627358879932416/0xVVQg3X_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">the needle-felted head of joyce carol oates</div> <div style="text-align: center; font-size: 14px;">@queenofbithynia</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from the needle-felted head of joyce carol oates. | Data | the needle-felted head of joyce carol oates | | --- | --- | | Tweets downloaded | 3186 | | Retweets | 1 | | Short tweets | 64 | | Tweets kept | 3121 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pdmfti8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @queenofbithynia's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hmbsp4tx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hmbsp4tx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/queenofbithynia') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
robertbogdon/model_tuning_mindalleeu83oz7r-labels-classification
robertbogdon
2022-11-11T00:29:29Z
0
0
sklearn
[ "sklearn", "tabular-classification", "baseline-trainer", "license:apache-2.0", "region:us" ]
tabular-classification
2022-11-11T00:29:26Z
--- license: apache-2.0 library_name: sklearn tags: - tabular-classification - baseline-trainer --- ## Baseline Model trained on model_tuning_mindalleeu83oz7r to apply classification on labels **Metrics of the best model:** accuracy 0.732672 recall_macro 0.630156 precision_macro 0.439732 f1_macro 0.455558 Name: LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000), dtype: float64 **See model plot below:** [Interactive scikit-learn diagram omitted. The fitted estimator is a Pipeline of an EasyPreprocessor over 771 typed feature columns (temperatures, superconditions, is_megas, feature_0 ... feature_767) followed by LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000).] **Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain). **Logs of training** including the models tried in the process can be found in logs.txt
EhtashamNQ/mt5-small-finetuned-amazon-en-es
EhtashamNQ
2022-11-11T00:24:55Z
3
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-10T16:33:31Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: EhtashamNQ/mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # EhtashamNQ/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5396 - Validation Loss: 2.8061 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5208, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 20.5300 | 7.1775 | 0 | | 5.5220 | 3.7545 | 1 | | 3.4137 | 3.5929 | 2 | | 2.9827 | 3.0892 | 3 | | 2.7228 | 2.8718 | 4 | | 2.5396 | 2.8061 | 5 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
evelynerhuan/distilbert-base-uncased-model-1
evelynerhuan
2022-11-11T00:05:42Z
120
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-10T23:31:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-model-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-model-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0114 | 1.0 | 554 | 1.9485 | | 1.6658 | 2.0 | 1108 | 1.6325 | | 1.2555 | 3.0 | 1662 | 1.6071 | | 1.038 | 4.0 | 2216 | 1.6472 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
Berdy96/finetuning-sentiment-model-3000-samples
Berdy96
2022-11-10T23:27:14Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-09T22:36:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.88 - name: F1 type: f1 value: 0.880794701986755 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3118 - Accuracy: 0.88 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
ChutianTao/distilbert-base-uncased-finetuned-squad-2
ChutianTao
2022-11-10T22:55:29Z
101
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-10T22:28:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6620 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5443 | 1.0 | 554 | 1.6070 | | 1.2504 | 2.0 | 1108 | 1.5107 | | 0.8091 | 3.0 | 1662 | 1.6620 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
mrojas/spanish-clinical-ner
mrojas
2022-11-10T21:44:59Z
141
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:wl", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-10T21:22:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wl metrics: - precision - recall - f1 - accuracy model-index: - name: spanish-clinical-ner results: - task: name: Token Classification type: token-classification dataset: name: wl type: wl config: WL split: train args: WL metrics: - name: Precision type: precision value: 0.6868542362104594 - name: Recall type: recall value: 0.7348639455782313 - name: F1 type: f1 value: 0.7100484758853013 - name: Accuracy type: accuracy value: 0.8262735659847573 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-clinical-ner This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset. It achieves the following results on the evaluation set: - Loss: 0.6181 - Precision: 0.6869 - Recall: 0.7349 - F1: 0.7100 - Accuracy: 0.8263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.0283 | 1.0 | 500 | 0.6862 | 0.6690 | 0.6959 | 0.6822 | 0.8091 | | 0.599 | 2.0 | 1000 | 0.6198 | 0.6856 | 0.7276 | 0.7059 | 0.8252 | | 0.4973 | 3.0 | 1500 | 0.6181 | 0.6869 | 0.7349 | 0.7100 | 0.8263 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
AlekseyKorshuk/dalio-all-io-1.3b-2-epoch
AlekseyKorshuk
2022-11-10T20:45:37Z
103
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:AlekseyKorshuk/dalio-all-io", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-10T20:29:53Z
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/dalio-all-io metrics: - accuracy model-index: - name: dalio-all-io-1.3b-2-epoch results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/dalio-all-io type: AlekseyKorshuk/dalio-all-io metrics: - name: Accuracy type: accuracy value: 0.057553854065481976 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dalio-all-io-1.3b-2-epoch This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset. It achieves the following results on the evaluation set: - Loss: 2.2949 - Accuracy: 0.0576 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 | | 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 | | 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 | | 2.7302 | 0.14 | 4 | 2.5234 | 0.0527 | | 2.7002 | 0.17 | 5 | 2.5078 | 0.0529 | | 2.5674 | 0.21 | 6 | 2.4941 | 0.0533 | | 2.6399 | 0.24 | 7 | 2.4883 | 0.0534 | | 2.533 | 0.28 | 8 | 2.4805 | 0.0536 | | 2.7202 | 0.31 | 9 | 2.4746 | 0.0536 | | 2.5137 | 0.34 | 10 | 2.4648 | 0.0534 | | 2.499 | 0.38 | 11 | 2.4512 | 0.0536 | | 2.7026 | 0.41 | 12 | 2.4414 | 0.0539 | | 2.5254 | 0.45 | 13 | 2.4336 | 0.0543 | | 2.5667 | 0.48 | 14 | 2.4238 | 0.0545 | | 2.5715 | 0.52 | 15 | 2.4160 | 0.0548 | | 2.3739 | 0.55 | 16 | 2.4102 | 0.0550 | | 2.4756 | 0.59 | 17 | 2.4043 | 0.0549 | | 2.4783 | 0.62 | 18 | 2.3984 | 0.0550 | | 2.5665 | 0.66 | 19 | 2.3906 | 0.0549 | | 2.4888 | 0.69 | 20 | 2.3906 | 0.0549 | | 2.4476 | 0.72 | 21 | 2.3828 | 0.0550 | | 2.604 | 0.76 | 22 | 2.375 | 0.0552 | | 2.3416 | 0.79 | 23 | 2.3652 | 0.0554 | | 2.6028 | 0.83 | 24 | 2.3555 | 0.0555 | | 2.3425 | 0.86 | 25 | 2.3477 | 0.0558 | | 2.4142 | 0.9 | 26 | 2.3398 | 0.0558 | | 2.5317 | 0.93 | 27 | 2.3340 | 0.0559 | | 2.4119 | 0.97 | 28 | 2.3301 | 0.0561 | | 2.4048 | 1.0 | 29 | 2.3262 | 0.0563 | | 1.9646 | 1.03 | 30 | 2.3242 | 0.0564 | | 1.9233 | 1.07 | 31 | 2.3203 | 0.0563 | | 1.9276 | 1.1 | 32 | 2.3203 | 0.0564 | | 1.8702 | 1.14 | 33 | 2.3281 | 0.0565 | | 2.0997 | 1.17 | 34 | 2.3340 | 0.0565 | | 1.7943 | 1.21 | 35 | 2.3320 | 0.0568 | | 1.8579 | 1.24 | 36 | 2.3242 | 0.0567 | | 1.8844 | 1.28 | 37 | 2.3145 | 0.0568 | | 1.9288 | 1.31 | 38 | 2.3086 | 0.0569 | | 1.6616 | 1.34 | 39 | 2.3047 | 0.0570 | | 1.6443 | 1.38 | 40 | 2.3047 | 0.0571 | | 1.7616 | 1.41 | 41 | 2.3027 | 0.0572 | | 1.7904 | 1.45 | 42 | 2.3027 | 0.0571 | | 1.8762 | 1.48 | 43 | 2.3027 | 0.0573 | | 1.6569 | 1.52 | 44 | 2.3027 | 0.0573 | | 1.647 | 1.55 | 45 | 2.3027 | 0.0573 | | 1.8168 | 1.59 | 46 | 2.3027 | 0.0574 | | 1.7194 | 1.62 | 47 | 2.3027 | 0.0573 | | 1.7667 | 1.66 | 48 | 2.3027 | 0.0572 | | 1.7621 | 1.69 | 49 | 2.3027 | 0.0573 | | 1.7269 | 1.72 | 50 | 2.3008 | 0.0573 | 
| 1.7815 | 1.76 | 51 | 2.3008 | 0.0574 | | 1.8318 | 1.79 | 52 | 2.2988 | 0.0574 | | 1.9366 | 1.83 | 53 | 2.2988 | 0.0575 | | 1.736 | 1.86 | 54 | 2.2969 | 0.0576 | | 1.9984 | 1.9 | 55 | 2.2969 | 0.0575 | | 1.7203 | 1.93 | 56 | 2.2949 | 0.0575 | | 1.7391 | 1.97 | 57 | 2.2949 | 0.0576 | | 1.6611 | 2.0 | 58 | 2.2949 | 0.0576 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
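The card above stops at the framework versions and gives no usage snippet. Below is a minimal sketch using the standard `transformers` text-generation pipeline; the repository id is an assumption inferred from the model-index name (`dalio-all-io-1.3b-2-epoch` under the AlekseyKorshuk namespace), and the prompt is purely illustrative, not taken from the training data.

```python
from transformers import pipeline

# Repo id assumed from the model-index name; adjust if the checkpoint lives elsewhere.
generator = pipeline("text-generation", model="AlekseyKorshuk/dalio-all-io-1.3b-2-epoch")

# Illustrative prompt; the card does not document an expected input format.
outputs = generator(
    "What is the key to building a successful investment process?",
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
)
print(outputs[0]["generated_text"])
```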
mrojas/roberta-clinical-wl-es-finetuned-ner
mrojas
2022-11-10T20:38:26Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:wl", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-10T20:16:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wl metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-clinical-wl-es-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: wl type: wl config: WL split: train args: WL metrics: - name: Precision type: precision value: 0.6865079365079365 - name: Recall type: recall value: 0.7355442176870748 - name: F1 type: f1 value: 0.7101806239737274 - name: Accuracy type: accuracy value: 0.8267950260730044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-clinical-wl-es-finetuned-ner This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset. It achieves the following results on the evaluation set: - Loss: 0.6227 - Precision: 0.6865 - Recall: 0.7355 - F1: 0.7102 - Accuracy: 0.8268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.028 | 1.0 | 500 | 0.6870 | 0.6558 | 0.6855 | 0.6703 | 0.8035 | | 0.5923 | 2.0 | 1000 | 0.6248 | 0.6851 | 0.7235 | 0.7038 | 0.8244 | | 0.4928 | 3.0 | 1500 | 0.6227 | 0.6865 | 0.7355 | 0.7102 | 0.8268 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
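Since the card leaves "Intended uses & limitations" empty, here is a minimal usage sketch with the token-classification pipeline. The Spanish clinical sentence is invented for illustration, and the entity label set is whatever the `wl` dataset defines; neither comes from the original card.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mrojas/roberta-clinical-wl-es-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Invented Spanish clinical sentence; label names come from the wl dataset config.
print(ner("Paciente con antecedentes de diabetes mellitus tipo 2 e hipertensión arterial."))
```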
AlekseyKorshuk/dalio-all-io-1.3b
AlekseyKorshuk
2022-11-10T20:11:48Z
96
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:AlekseyKorshuk/dalio-all-io", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-10T19:59:40Z
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/dalio-all-io metrics: - accuracy model-index: - name: dalio-all-io-1.3b results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/dalio-all-io type: AlekseyKorshuk/dalio-all-io metrics: - name: Accuracy type: accuracy value: 0.05582538140677676 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dalio-all-io-1.3b This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset. It achieves the following results on the evaluation set: - Loss: 2.3652 - Accuracy: 0.0558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 | | 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 | | 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 | | 2.7302 | 0.14 | 4 | 2.5234 | 0.0527 | | 2.7 | 0.17 | 5 | 2.5078 | 0.0528 | | 2.5674 | 0.21 | 6 | 2.4941 | 0.0532 | | 2.6406 | 0.24 | 7 | 2.4883 | 0.0534 | | 2.5315 | 0.28 | 8 | 2.4805 | 0.0536 | | 2.7202 | 0.31 | 9 | 2.4727 | 0.0537 | | 2.5144 | 0.34 | 10 | 2.4648 | 0.0536 | | 2.4983 | 0.38 | 11 | 2.4512 | 0.0537 | | 2.7029 | 0.41 | 12 | 2.4414 | 0.0539 | | 2.5198 | 0.45 | 13 | 2.4336 | 0.0540 | | 2.5706 | 0.48 | 14 | 2.4258 | 0.0545 | | 2.5688 | 0.52 | 15 | 2.4180 | 0.0548 | | 2.3793 | 0.55 | 16 | 2.4102 | 0.0552 | | 2.4785 | 0.59 | 17 | 2.4043 | 0.0554 | | 2.4688 | 0.62 | 18 | 2.3984 | 0.0553 | | 2.5674 | 0.66 | 19 | 2.3984 | 0.0553 | | 2.5054 | 0.69 | 20 | 2.3945 | 0.0554 | | 2.452 | 0.72 | 21 | 2.3887 | 0.0555 | | 2.5999 | 0.76 | 22 | 2.3828 | 0.0556 | | 2.3665 | 0.79 | 23 | 2.3789 | 0.0556 | | 2.6223 | 0.83 | 24 | 2.375 | 0.0557 | | 2.3562 | 0.86 | 25 | 2.3711 | 0.0557 | | 2.429 | 0.9 | 26 | 2.3691 | 0.0557 | | 2.563 | 0.93 | 27 | 2.3672 | 0.0558 | | 2.4573 | 0.97 | 28 | 2.3652 | 0.0558 | | 2.4883 | 1.0 | 29 | 2.3652 | 0.0558 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
yunocchi/swin-large-patch4-window12-384-in22k-respirator
yunocchi
2022-11-10T20:08:26Z
204
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-10T19:09:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-large-patch4-window12-384-in22k-respirator results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-large-patch4-window12-384-in22k-respirator This model is a fine-tuned version of [microsoft/swin-large-patch4-window12-384-in22k](https://huggingface.co/microsoft/swin-large-patch4-window12-384-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4272 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 0.9598 | 0.4074 | | 0.9359 | 2.0 | 10 | 0.4272 | 1.0 | | 0.9359 | 3.0 | 15 | 0.2660 | 0.8889 | | 0.3813 | 4.0 | 20 | 0.1257 | 1.0 | | 0.3813 | 5.0 | 25 | 0.1451 | 1.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
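The card does not show how to run inference. A minimal sketch with the image-classification pipeline follows; `respirator.jpg` is a placeholder path, and the class names are whatever the `imagefolder` dataset used for fine-tuning defined, which the card does not document.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="yunocchi/swin-large-patch4-window12-384-in22k-respirator",
)

# "respirator.jpg" is a placeholder; the pipeline also accepts URLs or PIL images.
for pred in classifier("respirator.jpg"):
    print(pred["label"], round(pred["score"], 3))
```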
lmvasque/prompt-ls-es-1
lmvasque
2022-11-10T19:27:28Z
107
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-08T14:02:07Z
--- license: cc-by-4.0 --- ## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-es-1 We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification. This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/) by the University of Manchester and Manchester Metropolitan University (UoM&MMU) Team in English, Spanish and Portuguese. You can find more details about the project in our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022). ## Models Our models were fine-tuned using prompt-learning for **Lexical Simplification**. These are the available models you can use (current model page in bold): | Model Name | Run # | Language | Setting | |----------------------------------------------------------------------|-------|:-----------:|---------------| | [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune | | [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune | | [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot | | **[prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1)** | **1** | **Spanish** | **fine-tune** | | [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune | | [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune | | [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune | | [prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2) | 2 | Portuguese | fine-tune | | [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune | For the zero-shot setting, we used the original models with no further training. Links to these models are also updated in the table above. ## Results We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage the users to also check our results in the development set, which show an increased performance for Spanish and Portuguese. You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link). 
| Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 | |------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------| | English | 1 | RoBERTa-L | fine-tune | simple | word | 5 | 5 | **0.6353** | **0.5308** | **0.4244** | **0.8739** | | English | 2 | mBERT | multilingual | easier | word | 10 | 10 | 0.4959 | 0.4235 | 0.3273 | 0.7560 | | English | 3 | RoBERTa-L | zero-shot | easier | word | 5 | - | 0.2654 | 0.268 | 0.1820 | 0.4906 | | Spanish | 1 | BERTIN | fine-tune | sinónimo | fácil | - | 3 | 0.3451 | **0.2907** | **0.2238** | **0.5543** | | Spanish | 2 | BERTIN | fine-tune | palabra | simple | - | 10 | 0.3614 | **0.2907**| 0.2225 | 0.538 | | Spanish | 3 | BERTIN | fine-tune | sinónimo | fácil | 10 | 10 | **0.3668** | 0.269 | 0.2128 | 0.5326 | | Portuguese | 1 | BR_BERTo | fine-tune | palavra | simples | - | 8 | **0.1711** | 0.1096 | 0.1011 | 0.2486 | | Portuguese | 2 | BR_BERTo | fine-tune | sinônimo | fácil | - | 10 | 0.1363 | 0.0962 | 0.0944 | 0.2379 | | Portuguese | 3 | BR_BERTo | fine-tune | sinônimo | simples | 5 | 10 | 0.1577 | **0.1283**| **0.1071**| **0.2834**| ## Citation If you use our results and scripts in your research, please cite our work: "[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)". ``` @inproceedings{vasquez-rodriguez-etal-2022-prompt-ls, title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification", author = "V{\'a}squez-Rodr{\'\i}guez, Laura and Nguyen, Nhung T. H. and Shardlow, Matthew and Ananiadou, Sophia", booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022", month = dec, year = "2022", } ```
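The card describes prompt-based lexical simplification but does not include an inference snippet. The sketch below uses the plain fill-mask pipeline with a prompt loosely based on the "sinónimo / fácil" entries in the results table; the exact template, candidate filtering, and ranking used by the authors are described in their paper and GitHub, so treat this as an illustrative approximation only.

```python
from transformers import pipeline

simplifier = pipeline("fill-mask", model="lmvasque/prompt-ls-es-1")

# Approximate prompt construction; the authors' exact template may differ.
mask = simplifier.tokenizer.mask_token
sentence = "El fármaco se administra por vía oral."
complex_word = "fármaco"
prompt = f"{sentence} Un sinónimo fácil de la palabra {complex_word} es {mask}."

for candidate in simplifier(prompt, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```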
lmvasque/prompt-ls-pt-3
lmvasque
2022-11-10T19:11:44Z
109
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-08T14:06:17Z
--- license: cc-by-4.0 --- ## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-pt-3 We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification. This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/) by the University of Manchester and Manchester Metropolitan University (UoM&MMU) Team in English, Spanish and Portuguese. You can find more details about the project in our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022). ## Models Our models were fine-tuned using prompt-learning for **Lexical Simplification**. These are the available models you can use (current model page in bold): | Model Name | Run # | Language | Setting | |----------------------------------------------------------------------|-------|:--------------:|---------------| | [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune | | [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune | | [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot | | [prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1) | 1 | Spanish | fine-tune | | [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune | | [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune | | [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune | | [prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2) | 2 | Portuguese | fine-tune | | **[prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3)** | **3** | **Portuguese** | **fine-tune** | For the zero-shot setting, we used the original models with no further training. Links to these models are also updated in the table above. ## Results We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage the users to also check our results in the development set, which show an increased performance for Spanish and Portuguese. You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link). 
| Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 | |------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------| | English | 1 | RoBERTa-L | fine-tune | simple | word | 5 | 5 | **0.6353** | **0.5308** | **0.4244** | **0.8739** | | English | 2 | mBERT | multilingual | easier | word | 10 | 10 | 0.4959 | 0.4235 | 0.3273 | 0.7560 | | English | 3 | RoBERTa-L | zero-shot | easier | word | 5 | - | 0.2654 | 0.268 | 0.1820 | 0.4906 | | Spanish | 1 | BERTIN | fine-tune | sinónimo | fácil | - | 3 | 0.3451 | **0.2907** | **0.2238** | **0.5543** | | Spanish | 2 | BERTIN | fine-tune | palabra | simple | - | 10 | 0.3614 | **0.2907**| 0.2225 | 0.538 | | Spanish | 3 | BERTIN | fine-tune | sinónimo | fácil | 10 | 10 | **0.3668** | 0.269 | 0.2128 | 0.5326 | | Portuguese | 1 | BR_BERTo | fine-tune | palavra | simples | - | 8 | **0.1711** | 0.1096 | 0.1011 | 0.2486 | | Portuguese | 2 | BR_BERTo | fine-tune | sinônimo | fácil | - | 10 | 0.1363 | 0.0962 | 0.0944 | 0.2379 | | Portuguese | 3 | BR_BERTo | fine-tune | sinônimo | simples | 5 | 10 | 0.1577 | **0.1283**| **0.1071**| **0.2834**| ## Citation If you use our results and scripts in your research, please cite our work: "[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)". ``` @inproceedings{vasquez-rodriguez-etal-2022-prompt-ls, title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification", author = "V{\'a}squez-Rodr{\'\i}guez, Laura and Nguyen, Nhung T. H. and Shardlow, Matthew and Ananiadou, Sophia", booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022", month = dec, year = "2022", } ```
lmvasque/prompt-ls-es-2
lmvasque
2022-11-10T19:11:23Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-08T14:02:37Z
--- license: cc-by-4.0 --- ## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-es-2 We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification. This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/) by the University of Manchester and Manchester Metropolitan University (UoM&MMU) Team in English, Spanish and Portuguese. You can find more details about the project in our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022). ## Models Our models were fine-tuned using prompt-learning for **Lexical Simplification**. These are the available models you can use (current model page in bold): | Model Name | Run # | Language | Setting | |----------------------------------------------------------------------|----|:-----------:|-----------| | [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune | | [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune | | [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot | | [prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1) | 1 | Spanish | fine-tune | | **[prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2)** | **2** | **Spanish** | **fine-tune** | | [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune | | [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune | | [prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2) | 2 | Portuguese | fine-tune | | [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune | For the zero-shot setting, we used the original models with no further training. Links to these models are also updated in the table above. ## Results We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage the users to also check our results in the development set, which show an increased performance for Spanish and Portuguese. You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link). 
| Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 | |------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------| | English | 1 | RoBERTa-L | fine-tune | simple | word | 5 | 5 | **0.6353** | **0.5308** | **0.4244** | **0.8739** | | English | 2 | mBERT | multilingual | easier | word | 10 | 10 | 0.4959 | 0.4235 | 0.3273 | 0.7560 | | English | 3 | RoBERTa-L | zero-shot | easier | word | 5 | - | 0.2654 | 0.268 | 0.1820 | 0.4906 | | Spanish | 1 | BERTIN | fine-tune | sinónimo | fácil | - | 3 | 0.3451 | **0.2907** | **0.2238** | **0.5543** | | Spanish | 2 | BERTIN | fine-tune | palabra | simple | - | 10 | 0.3614 | **0.2907**| 0.2225 | 0.538 | | Spanish | 3 | BERTIN | fine-tune | sinónimo | fácil | 10 | 10 | **0.3668** | 0.269 | 0.2128 | 0.5326 | | Portuguese | 1 | BR_BERTo | fine-tune | palavra | simples | - | 8 | **0.1711** | 0.1096 | 0.1011 | 0.2486 | | Portuguese | 2 | BR_BERTo | fine-tune | sinônimo | fácil | - | 10 | 0.1363 | 0.0962 | 0.0944 | 0.2379 | | Portuguese | 3 | BR_BERTo | fine-tune | sinônimo | simples | 5 | 10 | 0.1577 | **0.1283**| **0.1071**| **0.2834**| ## Citation If you use our results and scripts in your research, please cite our work: "[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)". ``` @inproceedings{vasquez-rodriguez-etal-2022-prompt-ls, title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification", author = "V{\'a}squez-Rodr{\'\i}guez, Laura and Nguyen, Nhung T. H. and Shardlow, Matthew and Ananiadou, Sophia", booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022", month = dec, year = "2022", } ```
huggingtweets/paulg
huggingtweets
2022-11-10T18:17:30Z
106
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-10T18:15:43Z
--- language: en thumbnail: http://www.huggingtweets.com/paulg/1668104246182/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1824002576/pg-railsconf_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Paul Graham</div> <div style="text-align: center; font-size: 14px;">@paulg</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Paul Graham. | Data | Paul Graham | | --- | --- | | Tweets downloaded | 3248 | | Retweets | 408 | | Short tweets | 169 | | Tweets kept | 2671 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3tt8g81l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @paulg's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hc03wa8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hc03wa8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/paulg') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Vested-Sigil/VanGO
Vested-Sigil
2022-11-10T17:54:49Z
0
0
null
[ "region:us" ]
null
2022-11-10T17:51:20Z
```python
#!/usr/bin/env python3
from io import BytesIO

import PIL
import requests
import torch
from diffusers import DiffusionPipeline


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
    revision="fp16",
)
pipe.to("cuda")
pipe.enable_attention_slicing()

### Text-to-Image
images = pipe.text2img("An astronaut riding a horse").images

### Image-to-Image
init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
prompt = "A fantasy landscape, trending on artstation"
images = pipe.img2img(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images

### Inpainting
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
prompt = "a cat sitting on a bench"
images = pipe.inpaint(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
```
huggingtweets/googlepoetics
huggingtweets
2022-11-10T17:53:15Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-10T17:52:16Z
--- language: en thumbnail: http://www.huggingtweets.com/googlepoetics/1668102791580/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/378800000152236311/e364d2a13dab35a8b65c9decf71ae134_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Google Poetics</div> <div style="text-align: center; font-size: 14px;">@googlepoetics</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Google Poetics. | Data | Google Poetics | | --- | --- | | Tweets downloaded | 1569 | | Retweets | 9 | | Short tweets | 35 | | Tweets kept | 1525 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2re8zf12/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @googlepoetics's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cwwobqqi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cwwobqqi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/googlepoetics') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
nielsr/detr-table-detection
nielsr
2022-11-10T17:21:51Z
214
2
transformers
[ "transformers", "pytorch", "detr", "object-detection", "endpoints_compatible", "region:us" ]
object-detection
2022-09-06T14:28:01Z
Hi, please don't use this model anymore; it only worked for a specific branch of mine. From now on, it's recommended to use https://huggingface.co/microsoft/table-transformer-detection from Transformers instead. Thanks, and have a great day.
amitjohn007/bert-finetuned-squad
amitjohn007
2022-11-10T16:43:37Z
59
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-10T05:33:12Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: amitjohn007/bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # amitjohn007/bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5685 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2704 | 0 | | 0.7816 | 1 | | 0.5685 | 2 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
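The card gives training metrics but no inference example. A minimal sketch with the question-answering pipeline is shown below; the repo ships TensorFlow weights (the card is Keras-generated), so the framework is pinned to `"tf"`, and the context/question pair is a toy example not taken from the card.

```python
from transformers import pipeline

# The repository contains TF weights, so the framework is pinned to "tf".
qa = pipeline("question-answering", model="amitjohn007/bert-finetuned-squad", framework="tf")

# Toy example; SQuAD-style extractive QA returns a span copied from the context.
result = qa(
    question="What does an extractive QA model return?",
    context="A SQuAD-style extractive QA model returns a text span copied from the context.",
)
print(result["answer"], round(result["score"], 3))
```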
ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT
ajtamayoh
2022-11-10T16:42:14Z
113
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-11T02:54:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: NER_EHR_Spanish_model_Mulitlingual_BERT results: [] widget: - text: 'Presentamos el caso de una mujer de 30 años, fumadora de 20 cigarrillos/día y sin otros antecedentes personales de interés. La paciente refiere infecciones urinarias de repetición. Se indica realización de ecografía abdominal, observándose una lesión nodular intravesical, por lo que es derivada a consulta de urología. En cistoscopia se visualiza tumoración exofítica de 3x3 cms. en cara lateral derecha con mucosa vesical íntegra, no encontrándose alteraciones en el resto de la vejiga. Se realiza exploración bajo anestesia (EBA) y resección transuretral de dicha lesión (RTU). En el informe de anatomía patológica macroscópicamente se describen fragmentos de pared vesical con urotelio conservado sin displasia, destacando en la capa muscular propia y en continuidad con el tejido muscular de la misma, una tumoración fusocelular con células que muestran unos núcleos de gran tamaño, pleomórficos, de aspecto vesiculoso y unos citoplasmas amplios eosinófilos. Esta celularidad se dispone en formas de fascículos mal definidos y entre la misma se reconoce abundante celularidad constituida fundamentalmente por numerosas células plasmáticas y leucocitos polimorfonucleares eosinófilos. No se observa un índice mitótico elevado, aunque el índice de proliferación medido como positividad nuclear con anticuerpos frente a MIB-1 se encuentra entre el 10 y el 25% de la celularidad tumoral. No se han objetivado áreas de necrosis. En estudio inmunohistoquímico se observa marcada positividad frente a citoqueratinas (AE1/AE3) y CAM5.2 a nivel citoplasmático, así como una marcada positividad citoplasmática con anticuerpos frente a p80 (proteína ALK). La celularidad descrita ha resultado negativa con anticuerpos frente a músculo liso (actina de músculo liso, MyO D1 y Calretinina), así como para CEA y citoqueratinas de alto peso molecular, observándose tan sólo positividad focal y aislada frente a EMA. Tras realización de FISH sobre material parafinado no se evidencia traslocación en el gen de la ALK. El diagnóstico anatomopatológico definitivo es tumor miofibroblástico inflamatorio vesical.' --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NER_EHR_Spanish_model_Mulitlingual_BERT This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the DisTEMIST shared task 2022 dataset. 
It is available at: https://temu.bsc.es/distemist/category/data/ It achieves the following results on the evaluation set: - Loss: 0.2603 - Precision: 0.5637 - Recall: 0.5801 - F1: 0.5718 - Accuracy: 0.9534 ## Model description For a complete description of our system, please go to: https://ceur-ws.org/Vol-3180/paper-26.pdf ## Training and evaluation data Dataset provided by DisTEMIST shared task, it is available at: https://temu.bsc.es/distemist/category/data/ ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 71 | 0.2060 | 0.5017 | 0.5540 | 0.5266 | 0.9496 | | No log | 2.0 | 142 | 0.2163 | 0.5363 | 0.5433 | 0.5398 | 0.9495 | | No log | 3.0 | 213 | 0.2245 | 0.5521 | 0.5356 | 0.5438 | 0.9514 | | No log | 4.0 | 284 | 0.2453 | 0.5668 | 0.5985 | 0.5822 | 0.9522 | | No log | 5.0 | 355 | 0.2433 | 0.5657 | 0.5579 | 0.5617 | 0.9530 | | No log | 6.0 | 426 | 0.2553 | 0.5762 | 0.5762 | 0.5762 | 0.9536 | | No log | 7.0 | 497 | 0.2603 | 0.5637 | 0.5801 | 0.5718 | 0.9534 | ### How to cite this work: Tamayo, A., Burgos, D. A., & Gelbukh, A. (2022). mbert and simple post-processing: A baseline for disease mention detection in spanish. In Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings. @inproceedings{tamayo2022mbert, title={mbert and simple post-processing: A baseline for disease mention detection in spanish}, author={Tamayo, Antonio and Burgos, Diego A and Gelbukh, Alexander}, booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings}, year={2022} } ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.2.0 - Tokenizers 0.12.1
google/deeplabv3_mobilenet_v2_1.0_513
google
2022-11-10T16:28:13Z
870
7
transformers
[ "transformers", "pytorch", "mobilenet_v2", "vision", "image-segmentation", "dataset:pascal-voc", "arxiv:1801.04381", "arxiv:1802.02611", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-11-10T16:05:57Z
--- license: other tags: - vision - image-segmentation datasets: - pascal-voc widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg example_title: Cat --- # MobileNetV2 with DeepLabV3+ MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab). Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md): > MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature. The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) preprocessor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513") model = AutoModelForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513") inputs = preprocessor(images=image, return_tensors="pt") outputs = model(**inputs) predicted_mask = preprocessor.post_process_semantic_segmentation(outputs) ``` Currently, both the feature extractor and model support PyTorch. ### BibTeX entry and citation info ```bibtex @inproceedings{deeplabv3plus2018, title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation}, author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam}, booktitle={ECCV}, year={2018} } ```
RafaelEiji/jurisbert-base-classify
RafaelEiji
2022-11-10T16:19:29Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-10T12:49:42Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [juridics/jurisbert-base-portuguese-uncased](https://huggingface.co/juridics/jurisbert-base-portuguese-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4838 - Accuracy: 0.7176 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 ### Training results ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.0+cu116 - Datasets 2.6.1 - Tokenizers 0.13.1
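Because the card leaves the intended use empty, here is a minimal text-classification sketch. The Portuguese legal snippet is invented, and the label names are not documented in the card, so expect generic `LABEL_<n>` outputs unless the model config maps them to something meaningful.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="RafaelEiji/jurisbert-base-classify")

# Invented Portuguese legal snippet; class names are undocumented in the card.
print(clf("Recurso especial interposto contra acórdão do tribunal de justiça."))
```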
mrm8488/flan-t5-xl-finetuned-gsm8k
mrm8488
2022-11-10T15:43:46Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:gsm8k", "doi:10.57967/hf/0102", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-10T11:05:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - gsm8k model-index: - name: flan-t5-xl-finetuned-gsm8k results: [] widget: - "Please, answer the following question reasoning step-by-step: If Manu eats twice a day, how many meals does he take for a week?" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-xl-finetuned-gsm8k This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on the gsm8k dataset. It achieves the following results on the evaluation set: - Loss: 0.2853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2845 | 1.0 | 1868 | 0.2778 | | 0.2204 | 2.0 | 3736 | 0.2718 | | 0.1803 | 3.0 | 5604 | 0.2762 | | 0.1578 | 4.0 | 7472 | 0.2853 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
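The card includes a widget prompt but no code. A minimal sketch with the text2text-generation pipeline follows; the prompt format is borrowed from the card's widget example, and note that this is a ~3B-parameter checkpoint, so a GPU (or offloading) is assumed for practical use.

```python
from transformers import pipeline

solver = pipeline("text2text-generation", model="mrm8488/flan-t5-xl-finetuned-gsm8k")

# Prompt format taken from the card's widget example.
prompt = (
    "Please, answer the following question reasoning step-by-step: "
    "If Manu eats twice a day, how many meals does he take for a week?"
)
print(solver(prompt, max_new_tokens=128)[0]["generated_text"])
```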
celinely/camembert-base-finetuned-sentence-simplification-fr
celinely
2022-11-10T15:26:39Z
56
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-27T09:41:46Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: camembert-base-finetuned-sentence-simplification-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-base-finetuned-sentence-simplification-fr This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Rouge1: 100.0 - Rouge2: 100.0 - Rougel: 100.0 - Rougelsum: 100.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|:---------:| | 0.0202 | 1.0 | 167 | 0.0006 | 99.978 | 99.9587 | 99.978 | 99.978 | | 0.0034 | 2.0 | 334 | 0.0001 | 100.0 | 100.0 | 100.0 | 100.0 | | 0.0019 | 3.0 | 501 | 0.0001 | 100.0 | 100.0 | 100.0 | 100.0 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu102 - Datasets 2.6.1 - Tokenizers 0.13.1
sanchit-gandhi/whisper-medium-es-5k-1e-5
sanchit-gandhi
2022-11-10T15:26:01Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "es", "dataset:facebook/multilingual_librispeech", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-10T09:01:56Z
--- language: - es license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - facebook/multilingual_librispeech metrics: - wer model-index: - name: Whisper Small Es - Sanchit Gandhi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Multilingual LibriSpeech type: facebook/multilingual_librispeech args: 'config: es, split: test' metrics: - name: Wer type: wer value: 4.988756935106611 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Es - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset. It achieves the following results on the evaluation set: - Loss: 0.1252 - Wer: 4.9888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2346 | 0.2 | 500 | 0.1957 | 8.5131 | | 0.1252 | 0.4 | 1000 | 0.1448 | 5.7876 | | 0.2076 | 0.6 | 1500 | 0.1361 | 5.5786 | | 0.2356 | 0.8 | 2000 | 0.1504 | 6.6611 | | 0.1893 | 1.0 | 2500 | 0.1252 | 4.9888 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.0 - Datasets 2.6.2.dev0 - Tokenizers 0.12.1
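No transcription example is given in the card. Below is a minimal sketch with the automatic-speech-recognition pipeline; `sample_es.wav` is a placeholder path to a local Spanish audio file, and chunking is optional for short clips.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/whisper-medium-es-5k-1e-5",
    chunk_length_s=30,  # chunk long audio; optional for short clips
)

# "sample_es.wav" is a placeholder; raw numpy arrays and URLs are also accepted.
print(asr("sample_es.wav")["text"])
```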
Matthijs/mobilenet_v1_0.75_192
Matthijs
2022-11-10T14:20:14Z
237
0
transformers
[ "transformers", "pytorch", "mobilenet_v1", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1704.04861", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-06-22T12:07:44Z
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileNet V1 MobileNet V1 model pre-trained on ImageNet-1k at resolution 192x192. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al, and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md). Disclaimer: The team releasing MobileNet V1 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md): > MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileNetV1FeatureExtractor, MobileNetV1ForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileNetV1FeatureExtractor.from_pretrained("Matthijs/mobilenet_v1_1.0_224") model = MobileNetV1ForImageClassification.from_pretrained("Matthijs/mobilenet_v1_1.0_224") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0). Currently, both the feature extractor and model support PyTorch.
135790ESERESERESEr/Salem
135790ESERESERESEr
2022-11-10T14:15:28Z
0
0
null
[ "region:us" ]
null
2022-11-10T14:14:18Z
--- license: openrail --- paint(saelam)
Narsil/layoutlm-funsd
Narsil
2022-11-10T13:52:40Z
54
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlm", "token-classification", "generated_from_trainer", "endpoints-template", "object-detection", "dataset:funsd", "autotrain_compatible", "endpoints_compatible", "region:us" ]
object-detection
2022-11-10T11:35:09Z
--- tags: - generated_from_trainer - endpoints-template library_name: transformers pipeline_tag: object-detection widget: - src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png" example_title: invoice - src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg" example_title: contract datasets: - funsd model-index: - name: layoutlm-funsd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set: - Loss: 1.0045 - Answer: {'precision': 0.7348314606741573, 'recall': 0.8084054388133498, 'f1': 0.7698646262507357, 'number': 809} - Header: {'precision': 0.44285714285714284, 'recall': 0.5210084033613446, 'f1': 0.47876447876447875, 'number': 119} - Question: {'precision': 0.8211009174311926, 'recall': 0.8403755868544601, 'f1': 0.8306264501160092, 'number': 1065} - Overall Precision: 0.7599 - Overall Recall: 0.8083 - Overall F1: 0.7866 - Overall Accuracy: 0.8106 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ## Deploy Model with Inference Endpoints Before we can get started, make sure you meet all of the following requirements: 1. An Organization/User with an active plan and *WRITE* access to the model repository. 2. Can access the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints) ### 1. Deploy LayoutLM and Send requests In this tutorial, you will learn how to deploy a [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how you can integrate it via an API into your products. This tutorial is not covering how you create the custom handler for inference. 
If you want to learn how to create a custom Handler for Inference Endpoints, you can either checkout the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler) We are going to deploy [philschmid/layoutlm-funsd](https://huggingface.co/philschmid/layoutlm-funsd) which implements the following `handler.py` ```python from typing import Dict, List, Any from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor import torch from subprocess import run # install tesseract-ocr and pytesseract run("apt install -y tesseract-ocr", shell=True, check=True) run("pip install pytesseract", shell=True, check=True) # helper function to unnormalize bboxes for drawing onto the image def unnormalize_box(bbox, width, height): return [ width * (bbox[0] / 1000), height * (bbox[1] / 1000), width * (bbox[2] / 1000), height * (bbox[3] / 1000), ] # set device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") class EndpointHandler: def __init__(self, path=""): # load model and processor from path self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device) self.processor = LayoutLMv2Processor.from_pretrained(path) def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]: """ Args: data (:obj:): includes the deserialized image file as PIL.Image """ # process input image = data.pop("inputs", data) # process image encoding = self.processor(image, return_tensors="pt") # run prediction with torch.inference_mode(): outputs = self.model( input_ids=encoding.input_ids.to(device), bbox=encoding.bbox.to(device), attention_mask=encoding.attention_mask.to(device), token_type_ids=encoding.token_type_ids.to(device), ) predictions = outputs.logits.softmax(-1) # post process output result = [] for item, inp_ids, bbox in zip( predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu() ): label = self.model.config.id2label[int(item.argmax().cpu())] if label == "O": continue score = item.max().item() text = self.processor.tokenizer.decode(inp_ids) bbox = unnormalize_box(bbox.tolist(), image.width, image.height) result.append({"label": label, "score": score, "text": text, "bbox": bbox}) return {"predictions": result} ``` ### 2. Send HTTP request using Python Hugging Face Inference endpoints can directly work with binary data, this means that we can directly send our image from our document to the endpoint. We are going to use `requests` to send our requests. (make your you have it installed `pip install requests`) ```python import json import requests as r import mimetypes ENDPOINT_URL="" # url of your endpoint HF_TOKEN="" # organization token where you deployed your endpoint def predict(path_to_image:str=None): with open(path_to_image, "rb") as i: b = i.read() headers= { "Authorization": f"Bearer {HF_TOKEN}", "Content-Type": mimetypes.guess_type(path_to_image)[0] } response = r.post(ENDPOINT_URL, headers=headers, data=b) return response.json() prediction = predict(path_to_image="path_to_your_image.png") print(prediction) # {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]}, ``` ### 3. 
Draw result on image To get a better understanding of what the model predicted you can also draw the predictions on the provided image. ```python from PIL import Image, ImageDraw, ImageFont # draw results on image def draw_result(path_to_image,result): image = Image.open(path_to_image) label2color = { "B-HEADER": "blue", "B-QUESTION": "red", "B-ANSWER": "green", "I-HEADER": "blue", "I-QUESTION": "red", "I-ANSWER": "green", } # draw predictions over the image draw = ImageDraw.Draw(image) font = ImageFont.load_default() for res in result: draw.rectangle(res["bbox"], outline="black") draw.rectangle(res["bbox"], outline=label2color[res["label"]]) draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font) return image draw_result("path_to_your_image.png", prediction["predictions"]) ```
toanbui1991/distilbert-base-uncased-finetuned-squad
toanbui1991
2022-11-10T13:39:29Z
59
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-09T03:01:51Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: toanbui1991/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # toanbui1991/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5101 - Train End Logits Accuracy: 0.6065 - Train Start Logits Accuracy: 0.5692 - Validation Loss: 1.1679 - Validation End Logits Accuracy: 0.6823 - Validation Start Logits Accuracy: 0.6523 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5101 | 0.6065 | 0.5692 | 1.1679 | 0.6823 | 0.6523 | 0 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.10.0 - Datasets 2.6.1 - Tokenizers 0.13.2
huggingtweets/barkmeta-lb22_sus-nft_god
huggingtweets
2022-11-10T13:18:55Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-10T13:00:51Z
--- language: en thumbnail: http://www.huggingtweets.com/barkmeta-lb22_sus-nft_god/1668086330381/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1579110344420622342/QzePSc2g_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1489268127565324291/ZQK5RoFg_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1559936197564268551/WXSx0leh_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LB22 & NFT God & Bark❓</div> <div style="text-align: center; font-size: 14px;">@barkmeta-lb22_sus-nft_god</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LB22 & NFT God & Bark❓. | Data | LB22 | NFT God | Bark❓ | | --- | --- | --- | --- | | Tweets downloaded | 1220 | 3250 | 3250 | | Retweets | 467 | 20 | 285 | | Short tweets | 381 | 165 | 1868 | | Tweets kept | 372 | 3065 | 1097 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vq9v8ck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @barkmeta-lb22_sus-nft_god's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ixknti18) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ixknti18/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/barkmeta-lb22_sus-nft_god') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ViktorDo/SciBERT-WIKI_Lifecycle_Finetuned
ViktorDo
2022-11-10T12:55:56Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-10T11:38:03Z
--- tags: - generated_from_trainer model-index: - name: SciBERT-WIKI_Lifecycle_Finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SciBERT-WIKI_Lifecycle_Finetuned This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1142 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0933 | 1.0 | 2082 | 0.1159 | | 0.0782 | 2.0 | 4164 | 0.0935 | | 0.0442 | 3.0 | 6246 | 0.1142 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
Matthijs/deeplabv3_mobilenet_v2_1.0_513
Matthijs
2022-11-10T12:50:21Z
296
1
transformers
[ "transformers", "pytorch", "coreml", "mobilenet_v2", "vision", "image-segmentation", "dataset:pascal-voc", "arxiv:1801.04381", "arxiv:1802.02611", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-06-28T11:16:06Z
--- license: other tags: - vision - image-segmentation datasets: - pascal-voc widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg example_title: Cat --- # MobileNetV2 with DeepLabV3+ MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab). Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md): > MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature. The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForSemanticSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/deeplabv3_mobilenet_v2_1.0_513") model = MobileNetV2ForSemanticSegmentation.from_pretrained("Matthijs/deeplabv3_mobilenet_v2_1.0_513") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_mask = logits.argmax(1).squeeze(0) ``` Currently, both the feature extractor and model support PyTorch. ### BibTeX entry and citation info ```bibtex @inproceedings{deeplabv3plus2018, title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation}, author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam}, booktitle={ECCV}, year={2018} } ```
Matthijs/mobilenet_v2_1.4_224
Matthijs
2022-11-10T12:48:00Z
1,146
0
transformers
[ "transformers", "pytorch", "coreml", "mobilenet_v2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1801.04381", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-06-27T13:32:28Z
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileNet V2 MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet). Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md): > MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature. The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier and **224** is the resolution of the input images the model was trained on. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/mobilenet_v2_1.0_224") model = MobileNetV2ForImageClassification.from_pretrained("Matthijs/mobilenet_v2_1.0_224") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0). Currently, both the feature extractor and model support PyTorch. ### BibTeX entry and citation info ```bibtex @inproceedings{mobilenetv22018, title={MobileNetV2: Inverted Residuals and Linear Bottlenecks}, author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen}, booktitle={CVPR}, year={2018} } ```
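Note that the snippet above loads the `Matthijs/mobilenet_v2_1.0_224` checkpoint even though this card describes the 1.4-depth model; assuming this repository follows the same layout, swapping in this repository's id should be all that is needed:

```python
from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForImageClassification

# Same usage as above, but pointing at this repository's 1.4-depth checkpoint
feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/mobilenet_v2_1.4_224")
model = MobileNetV2ForImageClassification.from_pretrained("Matthijs/mobilenet_v2_1.4_224")
```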
Vsevolod/company-names-similarity-sentence-transformer
Vsevolod
2022-11-10T12:44:01Z
648
16
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-24T11:15:41Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1222 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.WeightedRandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 122.1875, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
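The card template above leaves `{MODEL_NAME}` unfilled; assuming it refers to this repository, a minimal sketch for comparing company names looks like this:

```python
from sentence_transformers import SentenceTransformer, util

# Load the model under its repository id and embed a few company names
model = SentenceTransformer("Vsevolod/company-names-similarity-sentence-transformer")
names = ["Apple Inc.", "Apple Incorporated", "Microsoft Corporation"]
embeddings = model.encode(names, convert_to_tensor=True)

# Pairwise cosine-similarity matrix; higher values indicate more similar names
print(util.cos_sim(embeddings, embeddings))
```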
huggingtweets/sbe_sus
huggingtweets
2022-11-10T12:41:46Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-10T12:20:40Z
--- language: en thumbnail: http://www.huggingtweets.com/sbe_sus/1668084101960/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1579111637973336071/MkdCeTeX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">sberto.eth 📈</div> <div style="text-align: center; font-size: 14px;">@sbe_sus</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from sberto.eth 📈. | Data | sberto.eth 📈 | | --- | --- | | Tweets downloaded | 1273 | | Retweets | 648 | | Short tweets | 221 | | Tweets kept | 404 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rwjbirb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sbe_sus's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ejp5m2v) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ejp5m2v/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sbe_sus') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
wakio/dummy-model
wakio
2022-11-10T12:22:03Z
59
0
transformers
[ "transformers", "tf", "camembert", "fill-mask", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-10T11:54:52Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Tokenizers 0.13.2
m50x102b/yeh
m50x102b
2022-11-10T12:16:07Z
0
0
null
[ "region:us" ]
null
2022-11-10T12:15:38Z
A magnificent and ancient Blue ice cave at the edge of the known universe in a reflective pond of cosmic stars, cinematic, atmospheric, 8K, mystical, dynamic lighting, landscape photography by Marc Adamus,
ViktorDo/SciBERT-WIKI_Life_Form_Finetuned
ViktorDo
2022-11-10T10:53:55Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-10T10:06:48Z
--- tags: - generated_from_trainer model-index: - name: SciBERT-WIKI_Life_Form_Finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SciBERT-WIKI_Life_Form_Finetuned This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4427 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4919 | 1.0 | 1175 | 0.4396 | | 0.3471 | 2.0 | 2350 | 0.4196 | | 0.2362 | 3.0 | 3525 | 0.4427 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
Norod78/hebrew-gpt_neo-small
Norod78
2022-11-10T10:35:44Z
400
0
transformers
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "עוד בימי קדם" - text: "קוראים לי דורון ואני מעוניין ל" - text: "קוראים לי איציק ואני חושב ש" - text: "החתול שלך מאוד חמוד ו" license: mit --- # hebrew-gpt_neo-small Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each was trained on a TPUv3-8 which was made avilable to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ) 2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. 3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew) Created by Conneau & Wenzek et al. at 2020, the CC100-Hebrew This dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G., in Hebrew language. ## Training Config Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-small/configs) <BR> ## Usage ### Google Colab Notebook Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-small/Norod78_hebrew_gpt_neo_small_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.2 transformers==4.6.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-small") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-small", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids != None: max_len += len(encoded_prompt[0]) if max_len > 2048: max_len = 2048 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
facebook/maskformer-swin-base-ade
facebook
2022-11-10T10:22:19Z
3,410
11
transformers
[ "transformers", "pytorch", "maskformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-03-02T23:29:05Z
--- license: other tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # MaskFormer MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169). Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation from PIL import Image import requests url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade") inputs = feature_extractor(images=image, return_tensors="pt") model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade") outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to feature_extractor for postprocessing # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs) predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
ArtbyArbi/picbixex
ArtbyArbi
2022-11-10T10:00:15Z
33
0
diffusers
[ "diffusers", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-11-10T09:57:15Z
--- license: mit --- ### PicBixex on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook #### model by ArbiCreatesArt This is the Stable Diffusion model fine-tuned on the PicBixex concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt(s)`: **PicBixex** You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: PicBixex ![PicBixex 0](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(1).jpg) ![PicBixex 1](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(2).jpg) ![PicBixex 2](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(3).jpg) ![PicBixex 3](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(4).jpg) ![PicBixex 4](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(5).jpg) ![PicBixex 5](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(6).jpg) ![PicBixex 6](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(7).jpg) ![PicBixex 7](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(8).jpg) ![PicBixex 8](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(9).jpg) ![PicBixex 9](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(10).jpg) ![PicBixex 10](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(11).jpg) ![PicBixex 11](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(12).jpg) ![PicBixex 12](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(13).jpg) ![PicBixex 13](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(14).jpg) ![PicBixex 14](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(15).jpg) ![PicBixex 15](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(16).jpg) ![PicBixex 16](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(17).jpg) ![PicBixex 17](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(18).jpg) ![PicBixex 18](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(19).jpg) ![PicBixex 19](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(20).jpg) ![PicBixex 20](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(21).jpg) ![PicBixex 21](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(22).jpg) ![PicBixex 22](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(23).JPG) 
![PicBixex 23](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(24).jpg) ![PicBixex 24](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(25).jpg) ![PicBixex 25](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(26).jpg) ![PicBixex 26](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(27).jpg) ![PicBixex 27](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(28).jpg) ![PicBixex 28](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(29).jpg) ![PicBixex 29](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(30).jpg) ![PicBixex 30](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(31).jpg) ![PicBixex 31](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(32).jpg) ![PicBixex 32](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(33).jpg) ![PicBixex 33](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(34).jpg) ![PicBixex 34](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(35).jpg) ![PicBixex 35](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(36).jpg) ![PicBixex 36](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(37).jpg) ![PicBixex 37](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(38).jpg) ![PicBixex 38](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(39).jpg) ![PicBixex 39](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(40).jpg) ![PicBixex 40](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(41).jpg) ![PicBixex 41](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(42).jpg) ![PicBixex 42](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(43).jpg) ![PicBixex 43](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(44).jpg) ![PicBixex 44](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(45).jpg) ![PicBixex 45](https://huggingface.co/ArbiCreatesArt/picbixex/resolve/main/concept_images/PicBixex_(46).jpg)
qcs/ddpm-butterflies-128
qcs
2022-11-10T09:41:34Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-10T02:04:12Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/qcs/ddpm-butterflies-128/tensorboard?#scalars)
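The "How to use" section above is still a TODO; a minimal sketch of sampling from an unconditional DDPM checkpoint with 🤗 Diffusers, assuming this repository was pushed in the standard pipeline format, would look like this:

```python
from diffusers import DDPMPipeline

# Load the unconditional butterfly pipeline and draw one 128x128 sample
pipeline = DDPMPipeline.from_pretrained("qcs/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```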
naverpapago/garnet
naverpapago
2022-11-10T09:33:03Z
0
2
pytorch
[ "pytorch", "Scene Text Removal", "Image to Image", "arxiv:2210.07489", "license:apache-2.0", "region:us" ]
null
2022-11-08T02:01:55Z
--- license: apache-2.0 tags: - Scene Text Removal - Image to Image library_name: pytorch --- ### GaRNet This is a text-removal model introduced in the paper below and first released at [this page](https://github.com/naver/garnet). \ [The Surprisingly Straightforward Scene Text Removal Method With Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis](https://arxiv.org/abs/2210.07489). \ Hyeonsu Lee, Chankyu Choi \ Naver Corp. \ In ECCV 2022. ### Model description GaRNet is a generator that creates a text-free image from a given image and its corresponding text-box mask. It consists of a convolutional encoder and decoder. The encoder is built from residual blocks with an attention module called Gated Attention. The Gated Attention module has two spatial-attention branches: one finds text strokes and the other their surrounding regions. The module adjusts the weight of these two domains with trainable parameters. The model was trained in a PatchGAN manner with Region-of-Interest Generation. \ The discriminator consists of a convolutional encoder. Given an image, it determines whether each patch covering a text-box region is real or fake. All loss functions treat non-text-box regions as 'don't care'. ### Intended uses & limitations This model can be used wherever text must be erased from an image, such as concealing private information or editing text.\ You can use the raw model or the pre-trained model.\ Note that the pre-trained model was trained on both a synthetic dataset and SCUT-EnsText, and the SCUT-EnsText dataset can only be used for non-commercial research purposes. ### How to use You can use the inference code on [this page](https://github.com/naver/garnet). ### BibTeX entry and citation info ``` @inproceedings{lee2022surprisingly, title={The Surprisingly Straightforward Scene Text Removal Method with Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis}, author={Lee, Hyeonsu and Choi, Chankyu}, booktitle={European Conference on Computer Vision}, pages={457--472}, year={2022}, organization={Springer} } ```
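The gating idea described above can be illustrated with a small PyTorch sketch: two spatial-attention branches (text stroke vs. surrounding region) whose maps are blended by a trainable gate. This is an illustrative toy under my own naming, not the authors' code; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class GatedSpatialAttention(nn.Module):
    """Toy sketch of a gated, two-branch spatial attention block (not the paper's code)."""

    def __init__(self, channels: int):
        super().__init__()
        # One branch attends to text strokes, the other to their surrounding regions
        self.stroke_branch = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.region_branch = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.gate = nn.Parameter(torch.zeros(1))  # trainable balance between the two branches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)
        attention = g * self.stroke_branch(x) + (1 - g) * self.region_branch(x)
        return x * attention

features = torch.randn(1, 64, 128, 128)
print(GatedSpatialAttention(64)(features).shape)  # torch.Size([1, 64, 128, 128])
```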
alexionby/clip-l-roberta-finetuned
alexionby
2022-11-10T09:05:37Z
81
0
transformers
[ "transformers", "pytorch", "vision-text-dual-encoder", "feature-extraction", "generated_from_trainer", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-10T07:16:25Z
--- tags: - generated_from_trainer model-index: - name: clip-l-roberta-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clip-l-roberta-finetuned This model is a fine-tuned version of [./clip-l-roberta](https://huggingface.co/./clip-l-roberta) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 22 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.2
CompVis/stable-diffusion-v-1-1-original
CompVis
2022-11-10T08:54:14Z
0
17
null
[ "stable-diffusion", "text-to-image", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:2207.12598", "arxiv:1910.09700", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-08-09T19:36:42Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # Stable Diffusion v1 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The **Stable-Diffusion-v-1-1** was trained on 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en), followed by 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). For more information, please refer to [Training](#training). #### Download the weights - [sd-v1-1.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original/resolve/main/sd-v1-1.ckpt) - [sd-v1-1-full-ema.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original/resolve/main/sd-v1-1-full-ema.ckpt) This weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the D🧨iffusers library, [come here](https://huggingface.co/CompVis/stable-diffusion-v1-1). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). 
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. 
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`, which were trained as follows, - `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
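As noted above, the `.ckpt` files in this repository target the original CompVis codebase; if you instead use the 🧨 Diffusers port linked at the top of the card, usage typically looks like the following (the `.to("cuda")` call assumes a GPU and can be dropped for CPU inference):

```python
from diffusers import StableDiffusionPipeline

# Load the Diffusers-format v1-1 weights and generate one image
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-1")
pipe = pipe.to("cuda")
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")
```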
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab
ntsema
2022-11-10T08:25:33Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-09T07:30:30Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 0.3295938104448743 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3586 - Wer: 0.3296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4128 | 5.71 | 400 | 0.4462 | 0.5733 | | 0.2344 | 11.43 | 800 | 0.3489 | 0.3969 | | 0.1181 | 17.14 | 1200 | 0.3470 | 0.3602 | | 0.0837 | 22.85 | 1600 | 0.3608 | 0.3451 | | 0.0645 | 28.57 | 2000 | 0.3586 | 0.3296 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
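The card does not include a usage snippet; assuming the processor and tokenizer files were pushed alongside the weights, transcription with the high-level pipeline would look roughly like this (note that the base checkpoint was trained to emit phonetic, espeak-style tokens, so outputs may be phoneme sequences rather than orthographic text):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab",
)
print(asr("example.wav"))  # "example.wav" is a placeholder for a local 16 kHz audio file
```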
vent42/test
vent42
2022-11-10T08:22:44Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-11-10T08:22:44Z
--- license: bigscience-openrail-m ---
NoCrypt/cafe-instagram-unofficial
NoCrypt
2022-11-10T08:02:10Z
0
37
null
[ "stable-diffusion", "text-to-image", "en", "region:us" ]
text-to-image
2022-11-10T07:49:12Z
---
language:
- en
tags:
- stable-diffusion
- text-to-image
inference: false
---
# Cafe Unofficial Instagram TEST Model Release

> Trained on ~140k Instagram images made up primarily of Japanese accounts (a mix of cosplay, model, and personal accounts)

> While the model can create some realistic Instagram-esque images on its own, it is recommended that it be merged with another model (such as berry or anything) to reach its full potential

Scraped from [cafe's torrent](magnet:?xt=urn:btih:eb085b3e22310a338e6ea00172cb887c10c54cbc&dn=cafe-instagram-unofficial-test-epoch-9-140k-images-fp32.ckpt&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Fopentor.org%3A2710&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Ftracker.blackunicorn.xyz%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969)

Credit:
- [Cafe - かふぇ](https://lookup.guru/305088304284434434) for the model
- [Sikinx](https://lookup.guru/671692519985119232) for the drive link
TimePlan/q-FrozenLake-v1-4x4-noSlippery
TimePlan
2022-11-10T06:54:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-10T06:54:40Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="TimePlan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
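The snippet above assumes a `load_from_hub` helper that is not defined in the card. A minimal sketch of one possible implementation, assuming the checkpoint is a pickled dictionary (as the `q-learning.pkl` filename suggests) and that `huggingface_hub` is installed:

```python
# Possible implementation of the load_from_hub helper used above
# (assumes the checkpoint is a pickled dict, as q-learning.pkl suggests).
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download `filename` from `repo_id` and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```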
NoCrypt/momocha-mix
NoCrypt
2022-11-10T06:49:03Z
0
19
null
[ "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-11-10T06:39:29Z
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Momocha mix models

Scraped from [chenyfan's sharepoint](https://cyfan-my.sharepoint.com/:f:/g/personal/chenyfan_cyfan_onmicrosoft_com/EilOWB40m3ZJn6ahczIUIs4B6v0XvizO5YorOhG_5eYSUw?e=ZyP7qE)

Example output:
![](https://i.ibb.co/T1Gp5yg/photo-2022-11-09-15-54-30.jpg)
Terence3927/q-Taxi-v3
Terence3927
2022-11-10T06:20:54Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-10T06:20:45Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Terence3927/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
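The usage snippet above also assumes an `evaluate_agent` helper. A sketch of one way to write it, matching the call signature shown, is to roll out the greedy policy implied by the Q-table and average the episode returns; the gym API details (seeding, reset/step return values) vary by version and are assumptions here.

```python
# One possible evaluate_agent matching the call above: follow the Q-table
# greedily and average episode returns over n_eval_episodes rollouts.
import numpy as np

def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    episode_rewards = []
    for episode in range(n_eval_episodes):
        # Seed handling and the reset/step return values depend on the gym
        # version; the classic single-observation / 4-tuple API is assumed.
        state = env.reset(seed=eval_seed[episode]) if eval_seed else env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```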
Terence3927/q-FrozenLake-v1-4x4-noSlippery
Terence3927
2022-11-10T06:12:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-10T06:08:58Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Terence3927/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
NahedAbdelgaber/ner_base_model
NahedAbdelgaber
2022-11-10T05:33:06Z
117
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "en", "dataset:Satellite-Instrument-NER", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-18T02:14:09Z
---
language: en
datasets:
- Satellite-Instrument-NER
widget:
- text: "Centroid Moment Tensor Global Navigation Satellite System GNSS"
- text: "This paper describes the latest version of the algorithm MAIAC used for processing the MODIS Collection 6 data record."
- text: "We derive tropospheric column BrO during the ARCTAS and ARCPAC field campaigns in spring 2008 using retrievals of total column BrO from the satellite UV nadir sensors OMI and GOME - 2 using a radiative transfer model and stratospheric column BrO from a photochemical simulation."
license: mit
---
# bert-base-NER

## Model description

**bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves an **F1 score of 0.61** on the NER task. It has been trained to recognize two types of entities: instrument and satellite.

Specifically, this model is a *bert-base-cased* model that was fine-tuned on the Satellite-Instrument-NER dataset.

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and token-classification model from the Hub
tokenizer = AutoTokenizer.from_pretrained("NahedAbdelgaber/ner_base_model")
model = AutoModelForTokenClassification.from_pretrained("NahedAbdelgaber/ner_base_model")

# Build a NER pipeline and run it on an example sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Centroid Moment Tensor Global Navigation Satellite System GNSS"

ner_results = nlp(example)
print(ner_results)
```
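The example above prints token-level predictions. To merge sub-word pieces into whole entity spans, the same pipeline can be created with an aggregation strategy (a standard option of the transformers token-classification pipeline):

```python
# Group sub-word tokens into whole entity spans instead of per-token tags.
from transformers import pipeline

nlp_grouped = pipeline(
    "ner",
    model="NahedAbdelgaber/ner_base_model",
    aggregation_strategy="simple",
)
print(nlp_grouped("Centroid Moment Tensor Global Navigation Satellite System GNSS"))
```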
Yocel1/whisper-small-fr
Yocel1
2022-11-10T05:29:29Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-09T16:31:12Z
--- language: - hi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Fr - Joss results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 FR type: mozilla-foundation/common_voice_11_0 args: 'config: fr, split: test' metrics: - name: Wer type: wer value: 24.03653329331678 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Fr - Joss This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 FR dataset. It achieves the following results on the evaluation set: - Loss: 0.4212 - Wer: 24.0365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3803 | 0.99 | 1000 | 0.3992 | 23.9465 | | 0.2214 | 1.99 | 2000 | 0.3902 | 22.8108 | | 0.0986 | 2.98 | 3000 | 0.4028 | 22.4459 | | 0.0478 | 3.98 | 4000 | 0.4212 | 24.0365 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
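A minimal transcription sketch with the transformers ASR pipeline; the audio file path is a placeholder, and any file readable by ffmpeg should work.

```python
# Minimal French transcription sketch (the audio file path is a placeholder).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Yocel1/whisper-small-fr",
)
result = asr("example_fr.wav")  # any audio file readable by ffmpeg
print(result["text"])
```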
betelgeux/bert-base-uncased-issues-128
betelgeux
2022-11-10T05:21:31Z
101
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-09T07:16:54Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3932 | 1.0 | 1409 | 2.0750 | | 2.1659 | 2.0 | 2818 | 1.9781 | | 2.0364 | 3.0 | 4227 | 2.1215 | | 1.9399 | 4.0 | 5636 | 2.1018 | | 1.8857 | 5.0 | 7045 | 1.9919 | | 1.813 | 6.0 | 8454 | 2.2653 | | 1.7505 | 7.0 | 9863 | 2.0857 | | 1.7196 | 8.0 | 11272 | 1.9211 | | 1.672 | 9.0 | 12681 | 1.9853 | | 1.6379 | 10.0 | 14090 | 2.0391 | | 1.6037 | 11.0 | 15499 | 1.9305 | | 1.5699 | 12.0 | 16908 | 2.0291 | | 1.5363 | 13.0 | 18317 | 2.0492 | | 1.5155 | 14.0 | 19726 | 1.8807 | | 1.4999 | 15.0 | 21135 | 1.8604 | | 1.4784 | 16.0 | 22544 | 2.0348 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
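The repository name suggests the model was fine-tuned on GitHub-issue text (the training data is not documented above, so this is an assumption). A quick fill-mask check might look like this; the example sentence is illustrative.

```python
# Quick fill-mask sanity check (the example sentence is illustrative).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="betelgeux/bert-base-uncased-issues-128")
for pred in fill_mask("This issue is related to the [MASK] module."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```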
Terence3927/ppo-LunarLander-v2-optuna
Terence3927
2022-11-10T05:17:58Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-10T05:17:35Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.24 +/- 24.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
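The usage section above is left as a TODO stub. A sketch of how such a checkpoint is typically loaded and evaluated with `huggingface_sb3` and Stable-Baselines3 follows; the checkpoint filename is an assumption and must match whatever name was used when the model was pushed to the Hub.

```python
# Sketch: load and evaluate the checkpoint (the .zip filename is an assumption).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="Terence3927/ppo-LunarLander-v2-optuna",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires the box2d extra
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```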
xu1998hz/sescore_english_coco
xu1998hz
2022-11-10T03:59:19Z
0
1
null
[ "region:us" ]
null
2022-11-05T01:45:07Z
SEScore for COCO Image Captioning
zhangfx7/deberta-base-finetuned-cola
zhangfx7
2022-11-10T02:43:42Z
101
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-10T02:22:29Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: deberta-base-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-cola This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6187 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6187 | 1.0 | 535 | 0.6187 | 0.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
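A Matthews correlation of 0.0 on CoLA usually indicates that the classifier predicts a single class for every example. A small illustration with scikit-learn (an assumption for illustration only; during training the metric was computed by the GLUE evaluation script):

```python
# Matthews correlation is 0.0 whenever the model predicts only one class,
# which is the usual explanation for the score reported above.
from sklearn.metrics import matthews_corrcoef

references  = [1, 0, 1, 1, 0, 1]
predictions = [1, 1, 1, 1, 1, 1]  # degenerate "always acceptable" classifier
print(matthews_corrcoef(references, predictions))  # 0.0
```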