| Column | Type | Range / distinct values |
|:---|:---|:---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 06:31:37 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 06:31:07 |
| card | string | length 11 – 1.01M |
lizsergeeva/vit-base-patch16-224-finetuned-vit
lizsergeeva
2023-08-13T12:13:49Z
193
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-08-13T08:28:07Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-finetuned-vit results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9160530191458026 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-vit This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2549 - Accuracy: 0.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6065 | 0.99 | 47 | 0.4006 | 0.8748 | | 0.335 | 2.0 | 95 | 0.2745 | 0.9175 | | 0.2707 | 2.97 | 141 | 0.2549 | 0.9161 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
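A minimal inference sketch for a checkpoint like the one above (repo id taken from this row; `example.jpg` is a placeholder local image path, not from the card):

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint as an image-classification pipeline.
classifier = pipeline("image-classification", model="lizsergeeva/vit-base-patch16-224-finetuned-vit")

# Returns a list of {"label": ..., "score": ...} dicts, highest score first.
predictions = classifier("example.jpg")
print(predictions[:3])
```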
bigmorning/whisper_charsplit_new_0040
bigmorning
2023-08-13T12:08:34Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T12:08:26Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0040 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0040 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0035 - Train Accuracy: 0.0795 - Train Wermet: 10.6833 - Validation Loss: 0.5276 - Validation Accuracy: 0.0757 - Validation Wermet: 8.9798 - Epoch: 39 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | | 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 | | 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 | | 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 | | 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 | | 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 | | 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 | | 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 | | 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 | | 
0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 | | 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 | | 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 | | 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 | | 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 | | 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
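This and the other whisper_charsplit_new checkpoints listed below are TensorFlow fine-tunes of openai/whisper-tiny; a minimal transcription sketch under common assumptions (the audio path is a placeholder, and the processor is loaded from the base model since the card does not say whether one is bundled with the checkpoint):

```python
import librosa
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

# Feature extractor + tokenizer from the base model; fine-tuned weights from this repo.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_charsplit_new_0040")

# Whisper expects 16 kHz mono audio; "sample.wav" is a placeholder path.
audio, sr = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=sr, return_tensors="tf")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```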
Evan-Lin/Bart-large-abs-amazon-allure
Evan-Lin
2023-08-13T12:06:06Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-13T11:59:19Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin/Bart-large-abs-amazon-allure") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-amazon-allure") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-amazon-allure") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
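Since the tags above show the underlying checkpoint is a BART encoder-decoder, the causal-LM loading in the card's template may not be the best fit; a hedged seq2seq alternative (assuming the installed trl version provides `AutoModelForSeq2SeqLMWithValueHead`, with illustrative generation settings):

```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

# BART is an encoder-decoder, so the seq2seq value-head wrapper is used here.
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-amazon-allure")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-amazon-allure")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
# generate() delegates to the wrapped BART model's generation loop.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```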
bigmorning/whisper_charsplit_new_0039
bigmorning
2023-08-13T12:04:14Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T12:04:06Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0039 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0039 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0014 - Train Accuracy: 0.0795 - Train Wermet: 10.6908 - Validation Loss: 0.4937 - Validation Accuracy: 0.0761 - Validation Wermet: 9.2445 - Epoch: 38 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | | 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 | | 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 | | 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 | | 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 | | 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 | | 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 | | 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 | | 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 | | 
0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 | | 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 | | 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 | | 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 | | 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
fathyshalab/mdcsi-unternehmen-verbaende-setfit
fathyshalab
2023-08-13T11:51:49Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-13T11:50:59Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/mdcsi-unternehmen-verbaende-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/mdcsi-unternehmen-verbaende-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
manvik28/FinBERT_Tuned
manvik28
2023-08-13T11:47:14Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "base_model:ProsusAI/finbert", "base_model:finetune:ProsusAI/finbert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T11:15:40Z
--- base_model: ProsusAI/finbert tags: - generated_from_trainer model-index: - name: FinBERT_Tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FinBERT_Tuned This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 148 | 0.4307 | 0.7776 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
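A minimal inference sketch for this fine-tuned classifier (repo id from the row above; the example headline is illustrative, and the label names returned depend on the checkpoint's config, which the card does not document):

```python
from transformers import pipeline

# Load the fine-tuned FinBERT checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="manvik28/FinBERT_Tuned")

# Example financial headline; labels come from the model's own id2label mapping.
print(classifier("Quarterly revenue rose 12% year over year, beating estimates."))
```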
bigmorning/whisper_charsplit_new_0035
bigmorning
2023-08-13T11:46:46Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T11:46:39Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0035 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0035 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0130 - Train Accuracy: 0.0793 - Train Wermet: 11.1022 - Validation Loss: 0.4748 - Validation Accuracy: 0.0760 - Validation Wermet: 9.4521 - Epoch: 34 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | | 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 | | 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 | | 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 | | 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 | | 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 | | 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 | | 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 | | 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 | | 
0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
tridungduong16/OpenOrcaxOpenChat-Preview2-13B-GPTQ-samantha
tridungduong16
2023-08-13T11:43:34Z
3
0
peft
[ "peft", "text-generation", "dataset:ehartford/samantha-data", "region:us" ]
text-generation
2023-08-13T11:16:09Z
--- library_name: peft datasets: - ehartford/samantha-data pipeline_tag: text-generation --- ## Description Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 7b took 1 hour on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity. ## Prompt template: ``` ### System:\n{system}\n\n### User:\n{instruction}\n\n### Response: ``` ## How to use this GPTQ model from Python code First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: `GITHUB_ACTIONS=true pip install auto-gptq` In order to use this, you need to download the base model from [TheBloke/OpenOrcaxOpenChat-Preview2-13B-GPTQ](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GPTQ) and then load the adapter from this repo. Then try the following example code: ```python from transformers import AutoTokenizer from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig, get_gptq_peft_model MODEL_PATH_GPTQ = "OpenOrcaxOpenChat-Preview2-13B-GPTQ" ADAPTER_DIR = "OpenOrcaxOpenChat-Preview2-13B-GPTQ-samantha" DEV = "cuda:0" tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH_GPTQ, use_fast=True) model = AutoGPTQForCausalLM.from_quantized( MODEL_PATH_GPTQ, use_safetensors=True, trust_remote_code=False, use_triton=True, device="cuda:0", warmup_triton=False, trainable=True, inject_fused_attention=True, inject_fused_mlp=False, ) model = get_gptq_peft_model( model, model_id=ADAPTER_DIR, train_mode=False ) model.eval() ```
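Once loaded, generation would follow the prompt template given in the card; a hedged sketch continuing from the `tokenizer` and `model` above (the system message and sampling settings are illustrative, not from the card):

```python
# Build a prompt using the "### System / ### User / ### Response" template from the card.
system = "You are Samantha, a helpful and friendly assistant."  # illustrative system text
instruction = "What do you like to talk about?"
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
# Sampling settings are illustrative defaults, not taken from the card.
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```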
bigmorning/whisper_charsplit_new_0034
bigmorning
2023-08-13T11:42:29Z
61
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T11:42:22Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0034 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0034 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0047 - Train Accuracy: 0.0795 - Train Wermet: 10.7613 - Validation Loss: 0.4788 - Validation Accuracy: 0.0759 - Validation Wermet: 9.4065 - Epoch: 33 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | | 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 | | 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 | | 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 | | 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 | | 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 | | 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 | | 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 | | 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 | ### 
Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
ManuVleuBeu/bart_base_answer-aware_normal_eduQG
ManuVleuBeu
2023-08-13T11:39:33Z
175
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-13T11:23:42Z
--- tags: - generated_from_trainer model-index: - name: bart_base_answer-aware_normal_eduQG results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart_base_answer-aware_normal_eduQG This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
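Going by the model name, this appears to be an answer-aware question-generation fine-tune of BART; a minimal text2text sketch (repo id from the row above; the "answer: ... context: ..." input layout is a common convention for this task and only a guess, since the card does not document the expected format):

```python
from transformers import pipeline

# Load the fine-tuned BART checkpoint as a text2text-generation pipeline.
qg = pipeline("text2text-generation", model="ManuVleuBeu/bart_base_answer-aware_normal_eduQG")

# Input format is assumed, not documented in the card.
text = "answer: photosynthesis context: Plants convert sunlight into chemical energy through photosynthesis."
print(qg(text, max_new_tokens=48))
```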
bigmorning/whisper_charsplit_new_0033
bigmorning
2023-08-13T11:38:10Z
61
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T11:38:03Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0033 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0033 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0036 - Train Accuracy: 0.0795 - Train Wermet: 10.7759 - Validation Loss: 0.4667 - Validation Accuracy: 0.0761 - Validation Wermet: 9.0385 - Epoch: 32 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | | 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 | | 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 | | 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 | | 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 | | 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 | | 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 | | 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 
2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0032
bigmorning
2023-08-13T11:33:54Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T11:33:47Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0032 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0032 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0043 - Train Accuracy: 0.0795 - Train Wermet: 10.9497 - Validation Loss: 0.4525 - Validation Accuracy: 0.0761 - Validation Wermet: 9.3202 - Epoch: 31 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | | 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 | | 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 | | 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 | | 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 | | 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 | | 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
abdelhamidmalki/dqn-SpaceInvadersNoFrameskip-v4
abdelhamidmalki
2023-08-13T11:29:42Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T11:28:59Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 742.50 +/- 347.09 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abdelhamidmalki -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abdelhamidmalki -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga abdelhamidmalki ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
BabaYaga048/a2c-PandaReachDense-v3
BabaYaga048
2023-08-13T11:25:57Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T07:10:16Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.23 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
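The Usage section above is left as a TODO; a minimal loading sketch under common conventions (it assumes `huggingface_sb3` and `panda-gym` are installed and that the checkpoint file follows the usual `a2c-PandaReachDense-v3.zip` naming, which the card does not state):

```python
import gymnasium as gym
import panda_gym  # noqa: F401  # registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="BabaYaga048/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(100):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```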
Punit71/q-FrozenLake-v1-4x4-noSlippery
Punit71
2023-08-13T11:13:25Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T11:13:23Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Punit71/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
bigmorning/whisper_charsplit_new_0026
bigmorning
2023-08-13T11:07:34Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T11:07:26Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0026 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0026 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0142 - Train Accuracy: 0.0794 - Train Wermet: 11.3562 - Validation Loss: 0.4057 - Validation Accuracy: 0.0760 - Validation Wermet: 9.6831 - Epoch: 25 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | | 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
arviii/llama-2-7B-sharded_qlora-finetuned_sql
arviii
2023-08-13T11:05:57Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-13T11:05:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
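A hedged loading sketch that mirrors the bitsandbytes settings listed above (4-bit NF4, fp16 compute, no double quantization). The card does not name the base model, so the id below is only a placeholder guessed from the repository name and must be replaced with the actual base checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config documented in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder guess for the unnamed base model; replace as appropriate.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config, device_map="auto")

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "arviii/llama-2-7B-sharded_qlora-finetuned_sql")
```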
bigmorning/whisper_charsplit_new_0025
bigmorning
2023-08-13T11:03:11Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T11:03:04Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0025 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0025 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0181 - Train Accuracy: 0.0793 - Train Wermet: 11.3124 - Validation Loss: 0.3982 - Validation Accuracy: 0.0759 - Validation Wermet: 9.8710 - Epoch: 24 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | | 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0024
bigmorning
2023-08-13T10:58:51Z
61
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:58:42Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0024 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0024 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0224 - Train Accuracy: 0.0792 - Train Wermet: 11.4330 - Validation Loss: 0.3824 - Validation Accuracy: 0.0760 - Validation Wermet: 9.1995 - Epoch: 23 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | | 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0023
bigmorning
2023-08-13T10:54:34Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:54:27Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0023 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0023 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0262 - Train Accuracy: 0.0792 - Train Wermet: 11.4603 - Validation Loss: 0.3728 - Validation Accuracy: 0.0760 - Validation Wermet: 10.0035 - Epoch: 22 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | | 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 | | 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
datgtr/distilbert-base-uncased-finetuned-emotion
datgtr
2023-08-13T10:53:12Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T10:18:14Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9255 - name: F1 type: f1 value: 0.9257123738860233 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2205 - Accuracy: 0.9255 - F1: 0.9257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8406 | 1.0 | 250 | 0.3237 | 0.907 | 0.9058 | | 0.2582 | 2.0 | 500 | 0.2205 | 0.9255 | 0.9257 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1 - Datasets 2.14.4 - Tokenizers 0.13.3
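A minimal inference sketch for this emotion classifier (repo id from the row above; the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier.
classifier = pipeline("text-classification", model="datgtr/distilbert-base-uncased-finetuned-emotion")

# top_k=None returns scores for all emotion labels instead of only the top prediction.
print(classifier("I can't wait to see you this weekend!", top_k=None))
```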
vnktrmnb/bert-base-multilingual-cased-FT-TyDiQA_AUQC
vnktrmnb
2023-08-13T10:47:28Z
74
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "question-answering", "generated_from_keras_callback", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-08-12T06:19:58Z
--- license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_keras_callback model-index: - name: vnktrmnb/bert-base-multilingual-cased-FT-TyDiQA_AUQC results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # vnktrmnb/bert-base-multilingual-cased-FT-TyDiQA_AUQC This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4258 - Train End Logits Accuracy: 0.8820 - Train Start Logits Accuracy: 0.9031 - Validation Loss: 0.5351 - Validation End Logits Accuracy: 0.8686 - Validation Start Logits Accuracy: 0.8995 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1608, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.6488 | 0.8284 | 0.8563 | 0.5093 | 0.8673 | 0.8982 | 0 | | 0.4258 | 0.8820 | 0.9031 | 0.5351 | 0.8686 | 0.8995 | 1 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.4 - Tokenizers 0.13.3
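Since the repository ships TensorFlow weights (note the `tf` tag above) and the card stops at training metrics, a minimal extractive-QA sketch is added below; the question/context pair is purely illustrative, and `framework="tf"` is passed so the pipeline loads the TF checkpoint:

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned mBERT QA checkpoint with its TensorFlow weights.
qa = pipeline(
    "question-answering",
    model="vnktrmnb/bert-base-multilingual-cased-FT-TyDiQA_AUQC",
    framework="tf",
)

# Illustrative inputs; TyDiQA-style usage passes a question plus a supporting passage.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```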
bigmorning/whisper_charsplit_new_0021
bigmorning
2023-08-13T10:45:51Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:45:43Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0021 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0021 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0386 - Train Accuracy: 0.0789 - Train Wermet: 11.6855 - Validation Loss: 0.3517 - Validation Accuracy: 0.0760 - Validation Wermet: 10.0599 - Epoch: 20 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | | 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0020
bigmorning
2023-08-13T10:41:26Z
60
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:41:19Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0020 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0020 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0463 - Train Accuracy: 0.0787 - Train Wermet: 11.9677 - Validation Loss: 0.3402 - Validation Accuracy: 0.0760 - Validation Wermet: 10.2814 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | | 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 | | 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 | | 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
steve-tong/opus-mt-en-zh-tw
steve-tong
2023-08-13T10:39:43Z
107
2
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-zh", "base_model:finetune:Helsinki-NLP/opus-mt-en-zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-13T10:36:48Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-zh tags: - generated_from_trainer model-index: - name: opus-mt-en-zh-tw results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-zh-tw This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.0 - Tokenizers 0.13.3
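The card lists only the training setup, so a minimal inference sketch may help; it loads the fine-tuned Marian checkpoint named above through the `transformers` translation pipeline (the input sentence is illustrative, and the Taiwan-style target variety is inferred from the checkpoint name rather than stated in the card):

```python
from transformers import pipeline

# Minimal sketch: translate English input with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="steve-tong/opus-mt-en-zh-tw")

outputs = translator("The weather in Taipei is lovely today.")
print(outputs[0]["translation_text"])
```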
snob/HeungEol-KoAlpaca-12.8B-v1.0_LoRA
snob
2023-08-13T10:38:31Z
0
0
peft
[ "peft", "HeungEol", "ko", "region:us" ]
null
2023-08-10T12:38:00Z
--- library_name: peft language: - ko tags: - HeungEol --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
snob/HeungEol-KoAlpaca-12.8B-v1.0
snob
2023-08-13T10:38:08Z
13
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "HeungEol", "ko", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-10T13:56:44Z
--- tags: - HeungEol language: - ko ---
fengtc/Chinese-Llama-2-7b
fengtc
2023-08-13T10:36:04Z
9
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "dataset:LinkSoul/instruction_merge_set", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-13T10:14:27Z
--- license: openrail datasets: - LinkSoul/instruction_merge_set language: - zh - en widget: - text: "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n用中文回答,When is the best time to visit Beijing, and do you have any suggestions for me? [/INST]" example_title: "北京" - text: "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n用英文回答,特朗普是谁? [/INST]" example_title: "特朗普是谁" --- # Chinese Llama 2 7B Fully open source and fully commercially usable: a **Chinese Llama2 model with Chinese/English SFT datasets**. The input format strictly follows the *llama-2-chat* format, so the model is compatible with all optimizations targeting the original *llama-2-chat* model. ![Chinese LLaMA2 7B](.github/preview.jpg) ## Basic demo ![Base Demo](.github/demo.gif) ## Try it online > Talk is cheap, Show you the Demo. - [Demo / HuggingFace Spaces](https://huggingface.co/spaces/LinkSoul/Chinese-Llama-2-7b) - [One-click Colab launch](#) // in preparation ## Downloads - Model download: [Chinese Llama2 Chat Model](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b) - 4-bit quantized: [Chinese Llama2 4bit Chat Model](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b-4bit) > We used Chinese and English SFT data, about 10 million samples in total. - Dataset: [https://huggingface.co/datasets/LinkSoul/instruction_merge_set](https://huggingface.co/datasets/LinkSoul/instruction_merge_set) - Training and inference code: [https://github.com/LinkSoul-AI/Chinese-Llama-2-7b](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b) ## Quick test ```python from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer model_path = "LinkSoul/Chinese-Llama-2-7b" tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_path).half().cuda() streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) instruction = """[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{} [/INST]""" prompt = instruction.format("用英文回答,什么是夫妻肺片?") generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=4096, streamer=streamer) ``` ## Related projects - [Llama2](https://ai.meta.com/llama/) ## License [Apache-2.0 license](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b/blob/main/LICENSE) ## WeChat group Welcome to join the [WeChat group](.github/QRcode.jpg)
bigmorning/whisper_charsplit_new_0017
bigmorning
2023-08-13T10:28:14Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:28:07Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0017 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0017 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0760 - Train Accuracy: 0.0779 - Train Wermet: 12.2637 - Validation Loss: 0.3142 - Validation Accuracy: 0.0761 - Validation Wermet: 10.2638 - Epoch: 16 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | | 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0016
bigmorning
2023-08-13T10:23:51Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:23:43Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0016 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0016 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0872 - Train Accuracy: 0.0777 - Train Wermet: 12.3196 - Validation Loss: 0.3129 - Validation Accuracy: 0.0759 - Validation Wermet: 10.7707 - Epoch: 15 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | | 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 | | 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0014
bigmorning
2023-08-13T10:15:03Z
60
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:14:55Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0014 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0014 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1140 - Train Accuracy: 0.0770 - Train Wermet: 12.1100 - Validation Loss: 0.3004 - Validation Accuracy: 0.0760 - Validation Wermet: 10.3873 - Epoch: 13 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | | 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0013
bigmorning
2023-08-13T10:10:45Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:10:35Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0013 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0013 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1287 - Train Accuracy: 0.0766 - Train Wermet: 11.8509 - Validation Loss: 0.3029 - Validation Accuracy: 0.0759 - Validation Wermet: 10.2042 - Epoch: 12 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | | 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 | | 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
fathyshalab/mdcsi-reisen-tourismus-setfit
fathyshalab
2023-08-13T10:10:17Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-13T10:07:57Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/mdcsi-reisen-tourismus-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/mdcsi-reisen-tourismus-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
bigmorning/whisper_charsplit_new_0011
bigmorning
2023-08-13T10:02:00Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T10:01:52Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0011 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0011 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1628 - Train Accuracy: 0.0758 - Train Wermet: 11.7056 - Validation Loss: 0.2993 - Validation Accuracy: 0.0757 - Validation Wermet: 9.9497 - Epoch: 10 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | | 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0010
bigmorning
2023-08-13T09:57:34Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:57:26Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0010 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0010 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1820 - Train Accuracy: 0.0754 - Train Wermet: 11.7175 - Validation Loss: 0.3005 - Validation Accuracy: 0.0756 - Validation Wermet: 10.0755 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | | 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 | | 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
srgg000/nmda2
srgg000
2023-08-13T09:53:00Z
0
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T09:40:54Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### nmda2 Dreambooth model trained by srgg000 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
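The card shows no loading example; below is a minimal sketch that assumes the standard `StableDiffusionPipeline` layout implied by the repository tags, with "nmda2" used as the prompt token because that is the concept name in the card (prompt and output file name are otherwise illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the DreamBooth-trained pipeline and sample one image.
pipe = StableDiffusionPipeline.from_pretrained("srgg000/nmda2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "nmda2" is assumed to be the trained concept token; adjust the prompt as needed.
image = pipe("a photo of nmda2", num_inference_steps=30).images[0]
image.save("nmda2_sample.png")
```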
mrvincenzo/dqn-SpaceInvadersNoFrameskip-v4
mrvincenzo
2023-08-13T09:48:54Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T09:48:13Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 872.00 +/- 417.93 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrvincenzo -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrvincenzo -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrvincenzo ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
bigmorning/whisper_charsplit_new_0008
bigmorning
2023-08-13T09:48:48Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:48:41Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0008 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0008 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2282 - Train Accuracy: 0.0743 - Train Wermet: 11.7308 - Validation Loss: 0.3159 - Validation Accuracy: 0.0752 - Validation Wermet: 9.2086 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | | 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
HachiML/japanese-stablelm-alpha-7b-hh-rlhf-49k-ja-qlora-v2-1.2ep
HachiML
2023-08-13T09:48:00Z
1
0
peft
[ "peft", "dataset:HachiML/hh-rlhf-49k-ja-alpaca-format", "region:us" ]
null
2023-08-13T09:46:23Z
--- library_name: peft datasets: - HachiML/hh-rlhf-49k-ja-alpaca-format --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
darthPanda/whisper-tiny-urdu
darthPanda
2023-08-13T09:47:03Z
86
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ur", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T07:25:30Z
--- language: - ur license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Urdu - darth results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: ur split: test args: ur metrics: - name: Wer type: wer value: 59.544821179749185 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Urdu - darth This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.8511 - Wer Ortho: 62.5039 - Wer: 59.5448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.673 | 1.08 | 500 | 0.8511 | 62.5039 | 59.5448 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
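For completeness, a minimal transcription sketch follows; the audio path is a hypothetical local file, and the `language`/`task` hints are the usual Whisper generation arguments for Urdu transcription rather than anything mandated by the card:

```python
from transformers import pipeline

# Minimal sketch: transcribe a local Urdu audio clip with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="darthPanda/whisper-tiny-urdu")

result = asr(
    "sample_urdu_clip.wav",  # hypothetical path to a local audio file
    generate_kwargs={"language": "urdu", "task": "transcribe"},
)
print(result["text"])
```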
bigmorning/whisper_charsplit_new_0007
bigmorning
2023-08-13T09:44:23Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:44:16Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0007 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0007 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2561 - Train Accuracy: 0.0736 - Train Wermet: 11.3173 - Validation Loss: 0.3256 - Validation Accuracy: 0.0749 - Validation Wermet: 9.9431 - Epoch: 6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | | 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
asenella/MMVAEPlus_beta_25_scale_True_seed_3
asenella
2023-08-13T09:44:09Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T19:38:45Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
bigmorning/whisper_charsplit_new_0006
bigmorning
2023-08-13T09:39:58Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:39:51Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0006 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0006 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2886 - Train Accuracy: 0.0729 - Train Wermet: 11.5171 - Validation Loss: 0.3403 - Validation Accuracy: 0.0745 - Validation Wermet: 9.8042 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | | 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 | | 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0004
bigmorning
2023-08-13T09:31:14Z
60
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:31:06Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0004 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0004 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3813 - Train Accuracy: 0.0708 - Train Wermet: 11.9157 - Validation Loss: 0.3935 - Validation Accuracy: 0.0733 - Validation Wermet: 9.4615 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | | 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
TinToTin/ppo-CartPole-v1
TinToTin
2023-08-13T09:27:09Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T09:24:39Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 247.10 +/- 99.41 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 100000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Thineshan/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
bigmorning/whisper_charsplit_new_0003
bigmorning
2023-08-13T09:26:48Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:26:40Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0003 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0003 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4553 - Train Accuracy: 0.0692 - Train Wermet: 12.2404 - Validation Loss: 0.4371 - Validation Accuracy: 0.0723 - Validation Wermet: 10.9105 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | | 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 | | 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
bigmorning/whisper_charsplit_new_0001
bigmorning
2023-08-13T09:17:56Z
60
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T09:17:49Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_0001 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_0001 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8733 - Train Accuracy: 0.0602 - Train Wermet: 13.0686 - Validation Loss: 0.6470 - Validation Accuracy: 0.0676 - Validation Wermet: 11.4066 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
moraxgiga/llama-2-7b-Gokul_datadolly
moraxgiga
2023-08-13T09:09:24Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T09:17:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
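Because this repository holds only adapter weights, the quantization settings above imply loading a 4-bit base model at inference time; the sketch below assumes a Llama-2-7B base (suggested by the adapter name but not stated in the card) and reuses the nf4 settings listed:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: the adapter was trained on a Llama-2-7B base; swap in the actual base if different.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"

# Mirror the nf4 4-bit settings documented in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config, device_map="auto")

# Attach the LoRA adapter weights from this repository on top of the quantized base.
model = PeftModel.from_pretrained(base, "moraxgiga/llama-2-7b-Gokul_datadolly")
model.eval()
```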
asenella/MMVAEPlus_beta_5_scale_True_seed_1
asenella
2023-08-13T09:01:02Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T17:02:49Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
fathyshalab/mdcsi-finanzen-setfit
fathyshalab
2023-08-13T08:57:01Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-13T08:56:11Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/mdcsi-finanzen-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/mdcsi-finanzen-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
caffeinatedwoof/whisper-tiny-minds14-enUS
caffeinatedwoof
2023-08-13T08:55:58Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T05:59:19Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-minds14-enUS results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 30.3873431533006 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-minds14-enUS This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.7518 - Wer Ortho: 30.8480 - Wer: 30.3873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.0006 | 35.71 | 500 | 0.7518 | 30.8480 | 30.3873 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
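## Example usage

A minimal inference sketch; the audio file name is a placeholder for any recording you want to transcribe:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="caffeinatedwoof/whisper-tiny-minds14-enUS",
)

# "sample.wav" is a placeholder; ffmpeg is used to decode and resample other formats.
print(asr("sample.wav")["text"])
```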
vj1148/lora-peft-flant5-large-v1
vj1148
2023-08-13T08:44:16Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T08:44:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
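As a minimal loading sketch for this LoRA adapter, assuming the 8-bit setting above and a `google/flan-t5-large` base model (the base checkpoint is inferred from the repository name, not stated in this card):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Assumption: the adapter targets google/flan-t5-large, as the repo name suggests.
base_model_id = "google/flan-t5-large"
adapter_id = "vj1148/lora-peft-flant5-large-v1"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(
    base_model_id, load_in_8bit=True, device_map="auto"  # matches the 8-bit quantization above
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer(
    "Summarize: PEFT keeps the base model frozen and trains small adapter weights.",
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```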
facebook/mms-1b-fl102
facebook
2023-08-13T08:33:09Z
2,903
23
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mms", "ab", "af", "ak", "am", "ar", "as", "av", "ay", "az", "ba", "bm", "be", "bn", "bi", "bo", "sh", "br", "bg", "ca", "cs", "ce", "cv", "ku", "cy", "da", "de", "dv", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fa", "fj", "fi", "fr", "fy", "ff", "ga", "gl", "gn", "gu", "zh", "ht", "ha", "he", "hi", "hu", "hy", "ig", "ia", "ms", "is", "it", "jv", "ja", "kn", "ka", "kk", "kr", "km", "ki", "rw", "ky", "ko", "kv", "lo", "la", "lv", "ln", "lt", "lb", "lg", "mh", "ml", "mr", "mk", "mg", "mt", "mn", "mi", "my", "nl", "no", "ne", "ny", "oc", "om", "or", "os", "pa", "pl", "pt", "ps", "qu", "ro", "rn", "ru", "sg", "sk", "sl", "sm", "sn", "sd", "so", "es", "sq", "su", "sv", "sw", "ta", "tt", "te", "tg", "tl", "th", "ti", "ts", "tr", "uk", "vi", "wo", "xh", "yo", "zu", "za", "dataset:google/fleurs", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-27T14:17:11Z
--- tags: - mms language: - ab - af - ak - am - ar - as - av - ay - az - ba - bm - be - bn - bi - bo - sh - br - bg - ca - cs - ce - cv - ku - cy - da - de - dv - dz - el - en - eo - et - eu - ee - fo - fa - fj - fi - fr - fy - ff - ga - gl - gn - gu - zh - ht - ha - he - hi - sh - hu - hy - ig - ia - ms - is - it - jv - ja - kn - ka - kk - kr - km - ki - rw - ky - ko - kv - lo - la - lv - ln - lt - lb - lg - mh - ml - mr - ms - mk - mg - mt - mn - mi - my - zh - nl - 'no' - 'no' - ne - ny - oc - om - or - os - pa - pl - pt - ms - ps - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - ro - rn - ru - sg - sk - sl - sm - sn - sd - so - es - sq - su - sv - sw - ta - tt - te - tg - tl - th - ti - ts - tr - uk - ms - vi - wo - xh - ms - yo - ms - zu - za license: cc-by-nc-4.0 datasets: - google/fleurs metrics: - wer --- # Massively Multilingual Speech (MMS) - Finetuned ASR - FL102 This checkpoint is a model fine-tuned for multi-lingual ASR and part of Facebook's [Massive Multilingual Speech project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/). This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and makes use of adapter models to transcribe 100+ languages. The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 102 languages of [Fleurs](https://huggingface.co/datasets/google/fleurs). ## Table Of Content - [Example](#example) - [Supported Languages](#supported-languages) - [Model details](#model-details) - [Additional links](#additional-links) ## Example This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to transcribe audio of 1107 different languages. Let's look at a simple example. First, we install transformers and some other libraries ``` pip install torch accelerate torchaudio datasets pip install --upgrade transformers ```` **Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version is not yet available [on PyPI](https://pypi.org/project/transformers/) make sure to install `transformers` from source: ``` pip install git+https://github.com/huggingface/transformers.git ``` Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled to 16000 kHz. 
```py from datasets import load_dataset, Audio # English stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True) stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000)) en_sample = next(iter(stream_data))["audio"]["array"] # French stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True) stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000)) fr_sample = next(iter(stream_data))["audio"]["array"] ``` Next, we load the model and processor ```py from transformers import Wav2Vec2ForCTC, AutoProcessor import torch model_id = "facebook/mms-1b-fl102" processor = AutoProcessor.from_pretrained(model_id) model = Wav2Vec2ForCTC.from_pretrained(model_id) ``` Now we process the audio data, pass the processed audio data to the model and transcribe the model output, just like we usually do for Wav2Vec2 models such as [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) ```py inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs).logits ids = torch.argmax(outputs, dim=-1)[0] transcription = processor.decode(ids) # 'joe keton disapproved of films and buster also had reservations about the media' ``` We can now keep the same model in memory and simply switch out the language adapters by calling the convenient [`load_adapter()`]() function for the model and [`set_target_lang()`]() for the tokenizer. We pass the target language as an input - "fra" for French. ```py processor.tokenizer.set_target_lang("fra") model.load_adapter("fra") inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs).logits ids = torch.argmax(outputs, dim=-1)[0] transcription = processor.decode(ids) # "ce dernier est volé tout au long de l'histoire romaine" ``` In the same way the language can be switched out for all other supported languages. Please have a look at: ```py processor.tokenizer.vocab.keys() ``` For more details, please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms). ## Supported Languages This model supports 102 languages. Unclick the following to toogle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3). You can find more details about the languages and their ISO 649-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html). <details> <summary>Click to toggle</summary> - afr - amh - ara - asm - ast - azj-script_latin - bel - ben - bos - bul - cat - ceb - ces - ckb - cmn-script_simplified - cym - dan - deu - ell - eng - est - fas - fin - fra - ful - gle - glg - guj - hau - heb - hin - hrv - hun - hye - ibo - ind - isl - ita - jav - jpn - kam - kan - kat - kaz - kea - khm - kir - kor - lao - lav - lin - lit - ltz - lug - luo - mal - mar - mkd - mlt - mon - mri - mya - nld - nob - npi - nso - nya - oci - orm - ory - pan - pol - por - pus - ron - rus - slk - slv - sna - snd - som - spa - srp-script_latin - swe - swh - tam - tel - tgk - tgl - tha - tur - ukr - umb - urd-script_arabic - uzb-script_latin - vie - wol - xho - yor - yue-script_traditional - zlm - zul </details> ## Model details - **Developed by:** Vineel Pratap et al. 
- **Model type:** Multi-Lingual Automatic Speech Recognition model
- **Language(s):** 100+ languages, see [supported languages](#supported-languages)
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Audio sampling rate**: 16 kHz
- **Cite as:**

```bibtex
@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal={arXiv},
  year={2023}
}
```

## Additional Links

- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms)
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
  - [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
  - [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
asenella/MMVAEPlus_beta_25_scale_True_seed_0
asenella
2023-08-13T08:29:36Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T16:49:53Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
fathyshalab/mdcsi-moebel-einrichtungshaeuser-setfit
fathyshalab
2023-08-13T08:25:32Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-13T08:24:41Z
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---

# fathyshalab/mdcsi-moebel-einrichtungshaeuser-setfit

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-moebel-einrichtungshaeuser-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
GhifSmile/distilbert-base-uncased-DSC-new-cllbck
GhifSmile
2023-08-13T08:19:27Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T08:01:21Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: distilbert-base-uncased-DSC-new-cllbck results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-DSC-new-cllbck This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1160 - Accuracy: 0.9817 - Precision: 0.9831 - Recall: 0.9818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:| | 0.5329 | 1.0 | 618 | 0.1812 | 0.9511 | 0.9577 | 0.9518 | | 0.0853 | 2.0 | 1236 | 0.1160 | 0.9817 | 0.9831 | 0.9818 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
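## Example usage

A minimal inference sketch; note that the card does not document the label set, so the predicted labels depend on the (undocumented) fine-tuning data:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GhifSmile/distilbert-base-uncased-DSC-new-cllbck",
)

# The input sentence is a placeholder example.
print(classifier("The delivery was quick and the packaging was intact."))
```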
Madhur-01/my_awesome_qa_model
Madhur-01
2023-08-13T08:18:15Z
62
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-08-13T07:31:28Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.4973 - Validation Loss: 1.7800 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4245 | 2.1038 | 0 | | 1.7543 | 1.7800 | 1 | | 1.4973 | 1.7800 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.4 - Tokenizers 0.13.3
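## Example usage

A minimal extractive question-answering sketch; the question and context are placeholder examples, and `framework="tf"` is set because the checkpoint is stored as TensorFlow weights:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Madhur-01/my_awesome_qa_model",
    framework="tf",  # load the TensorFlow weights of this checkpoint
)

result = qa(
    question="What does the model predict?",
    context="This extractive QA model predicts the start and end positions of the answer span in the context.",
)
print(result["answer"], result["score"])
```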
fathyshalab/mdcsi-mode-schmuck-zubehoer-setfit
fathyshalab
2023-08-13T08:01:34Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-13T08:00:39Z
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---

# fathyshalab/mdcsi-mode-schmuck-zubehoer-setfit

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-mode-schmuck-zubehoer-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
modelmaker/luna
modelmaker
2023-08-13T07:55:20Z
0
0
diffusers
[ "diffusers", "cat", "ay", "dataset:Open-Orca/OpenOrca", "license:creativeml-openrail-m", "region:us" ]
null
2023-08-13T07:53:41Z
--- license: creativeml-openrail-m datasets: - Open-Orca/OpenOrca language: - ay metrics: - accuracy library_name: diffusers tags: - cat ---
zjunlp/knowlm-13b-base-v1.0
zjunlp
2023-08-13T07:54:42Z
118
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-13T03:13:33Z
---
license: apache-2.0
---

<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/KnowLM.png?raw=true" alt="ZJU-KnowLM" style="width: 40%; min-width: 40px; display: block; margin: auto;"></a>
</p>

Built upon LLaMA-13B, this version incorporates pretraining weights from a secondary full-scale pretraining phase on bilingual Chinese and English data. This additional pretraining improves the model's comprehension of Chinese. For further details, please refer to this [**link**](https://github.com/zjunlp/KnowLM).
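A minimal text-generation sketch; the prompt and sampling settings are illustrative, and the 13B checkpoint needs a large GPU (or CPU offloading via `accelerate`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zjunlp/knowlm-13b-base-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Knowledge graphs are"  # the model is bilingual, so Chinese prompts work as well
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```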
nekohacker591/google1
nekohacker591
2023-08-13T07:32:54Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gptj", "text-generation", "text generation", "conversational", "en", "license:creativeml-openrail-m", "autotrain_compatible", "region:us" ]
text-generation
2023-08-13T02:07:02Z
---
license: creativeml-openrail-m
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---

# Pygmalion 6B

## Model description

Pygmalion 6B is a proof-of-concept dialogue model based on EleutherAI's [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B).

**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.

## Training data

The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.

## Training procedure

Model weights were initialized from the `uft-6b` ConvoGPT model made available in [this commit](https://huggingface.co/hakurei/convogpt/tree/41b67bfddb6cd97070ffddf708e9720c9cb8d224/6b-uft).

The model was then further fine-tuned on ~48.5 million tokens for ~5k steps on 4 NVIDIA A40s using DeepSpeed.

## Intended use

### The easy way

We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).

### The manual way

The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:

```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```

Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:

```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```

Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.

## Known issues

We haven't played around with the model enough to enumerate them. Feel free to give us some feedback!
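As a rough illustration of the manual format above, here is a minimal generation sketch; the character, persona, and sampling settings are invented for the example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nekohacker591/google1"  # this repository, a GPT-J-6B based dialogue model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # 6B weights; needs accelerate + a GPU
)

# Build a prompt in the format described above (the character "Aria" is a made-up example).
prompt = (
    "Aria's Persona: Aria is a cheerful café owner who loves talking about coffee.\n"
    "<START>\n"
    "You: Hi Aria, what do you recommend today?\n"
    "Aria:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```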
timjwhite/distilhubert-finetuned-gtzan
timjwhite
2023-08-13T07:21:22Z
168
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:Sandiago21/distilhubert-finetuned-gtzan", "base_model:finetune:Sandiago21/distilhubert-finetuned-gtzan", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-11T10:54:34Z
--- license: apache-2.0 base_model: Sandiago21/distilhubert-finetuned-gtzan tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.88 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [Sandiago21/distilhubert-finetuned-gtzan](https://huggingface.co/Sandiago21/distilhubert-finetuned-gtzan) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.9951 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0951 | 1.0 | 57 | 0.5566 | 0.87 | | 0.0629 | 2.0 | 114 | 0.6819 | 0.83 | | 0.0231 | 3.0 | 171 | 0.6118 | 0.86 | | 0.0159 | 4.0 | 228 | 0.9208 | 0.83 | | 0.0374 | 5.0 | 285 | 0.8746 | 0.85 | | 0.1714 | 6.0 | 342 | 0.6671 | 0.87 | | 0.2148 | 7.0 | 399 | 1.1850 | 0.79 | | 0.0147 | 8.0 | 456 | 1.0551 | 0.79 | | 0.0788 | 9.0 | 513 | 1.5179 | 0.79 | | 0.0015 | 10.0 | 570 | 1.3290 | 0.8 | | 0.0049 | 11.0 | 627 | 1.0943 | 0.85 | | 0.0012 | 12.0 | 684 | 1.0667 | 0.85 | | 0.0043 | 13.0 | 741 | 1.1816 | 0.82 | | 0.0015 | 14.0 | 798 | 0.9108 | 0.88 | | 0.0011 | 15.0 | 855 | 1.0289 | 0.87 | | 0.001 | 16.0 | 912 | 0.7696 | 0.87 | | 0.0006 | 17.0 | 969 | 0.8539 | 0.87 | | 0.1001 | 18.0 | 1026 | 1.1917 | 0.78 | | 0.0017 | 19.0 | 1083 | 1.0016 | 0.83 | | 0.0525 | 20.0 | 1140 | 0.9513 | 0.88 | | 0.0004 | 21.0 | 1197 | 0.9268 | 0.86 | | 0.0003 | 22.0 | 1254 | 1.1209 | 0.82 | | 0.0003 | 23.0 | 1311 | 0.9270 | 0.87 | | 0.0003 | 24.0 | 1368 | 1.1148 | 0.84 | | 0.0003 | 25.0 | 1425 | 1.0507 | 0.85 | | 0.0002 | 26.0 | 1482 | 1.0156 | 0.86 | | 0.0002 | 27.0 | 1539 | 1.0062 | 0.87 | | 0.0002 | 28.0 | 1596 | 1.0124 | 0.87 | | 0.0002 | 29.0 | 1653 | 1.0154 | 0.87 | | 0.0002 | 30.0 | 1710 | 1.0092 | 0.88 | | 0.0002 | 31.0 | 1767 | 1.0123 | 0.88 | | 0.0175 | 32.0 | 1824 | 0.9928 | 0.88 | | 0.0002 | 33.0 | 1881 | 1.0014 | 0.88 | | 0.0115 | 34.0 | 1938 | 0.9989 | 0.88 | | 0.0001 | 35.0 | 1995 | 0.9871 | 0.88 | | 0.0001 | 36.0 | 2052 | 0.9920 | 0.88 | | 0.0002 | 37.0 | 2109 | 0.9974 | 0.88 | | 0.0002 | 38.0 | 2166 | 0.9950 | 0.88 | | 0.0001 | 39.0 | 2223 | 0.9997 | 0.88 | | 0.0001 | 40.0 | 2280 | 0.9951 | 0.88 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
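## Example usage

A minimal inference sketch; the audio file name is a placeholder clip, and the labels are the GTZAN music genres (rock, jazz, and so on):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="timjwhite/distilhubert-finetuned-gtzan",
)

# "song.wav" is a placeholder; any short music clip works.
print(classifier("song.wav", top_k=3))
```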
caiAtSNU/ppo-from-scratch-LunarLander-v2
caiAtSNU
2023-08-13T07:10:14Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T07:07:30Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -126.67 +/- 91.01 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo_solution' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'caiAtSNU/ppo-from-scratch-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
LongSafari/hyenadna-medium-160k-seqlen
LongSafari
2023-08-13T07:05:42Z
17
2
transformers
[ "transformers", "arxiv:2306.15794", "arxiv:2302.10866", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
null
2023-06-23T05:23:10Z
--- license: bsd-3-clause --- # HyenaDNA Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**. See below for an [overview](#model) of the model and training. Better yet, check out these resources. **Resources:** - [arxiv](https://arxiv.org/abs/2306.15794) - [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) - [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) - [github](https://github.com/HazyResearch/hyena-dna) **Links to all HuggingFace models:** - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main) - [tiny-1k-d256](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen-d256/tree/main) - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main) - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main) - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main) - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main) See [GPU requirements](#hardware) for each model. ### Sample snippet This code example lets you select which pretrained model to load from HuggingFace, perform inference and get embeddings. See the [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) for these classes, or the ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) script in the main [github](https://github.com/HazyResearch/hyena-dna). ```python # instantiate pretrained model pretrained_model_name = 'hyenadna-medium-450k-seqlen' max_length = 450_000 model = HyenaDNAPreTrainedModel.from_pretrained( './checkpoints', pretrained_model_name, ) # create tokenizer, no training involved :) tokenizer = CharacterTokenizer( characters=['A', 'C', 'G', 'T', 'N'], # add DNA characters model_max_length=max_length, ) # create a sample sequence = 'ACTG' * int(max_length/4) tok_seq = tokenizer(sequence)["input_ids"] # place on device, convert to tensor tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device) # unsqueeze for batch dim # prep model and forward model.to(device) model.eval() # deterministic with torch.inference_mode(): embeddings = model(tok_seq) print(embeddings.shape) # embeddings here! ``` ### How to use pretrained weights - [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) The colab is the easiest entry point, you can finetune a small model, and do inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first. - [github](https://github.com/HazyResearch/hyena-dna) Otherwise, checkout of the main HyenaDNA repo for how to load weights into Pytorch Lightning. We use Pytorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the actual weights.ckpt, and then pass it to Pytorch Lightning via command line or config. See the [github](https://github.com/HazyResearch/hyena-dna) README for how to do all that. If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too. 
### GPU requirements (suggested) <a name="hardware"></a> Here are suggestions on the hardware (preferred minimum) we think you can use for each model. GPU during: Pretrain, fine-tune, inference - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4) - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4) - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4) - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4) - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40) T4: 16GB A100-40: 40GB A100-80: 80GB ## Model & Training Overview <a name="model"></a> HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations. This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention). We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer. We pretrain using next token (nucleotide) prediction on the human reference genome (HG38). HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning. Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA! ### Authors Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re. **Contact** Eric Nguyen, etnguyen@stanford.edu Michael Poli, poli@stanford.edu Marjan Faizi, Marjan_Faizi@hms.harvard.edu ## Citation Feel free to cite us :) ``` @article{nguyen2023hyenadna, title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution}, author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré}, year={2023}, eprint={2306.15794}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
LongSafari/hyenadna-small-32k-seqlen
LongSafari
2023-08-13T07:04:45Z
15
0
transformers
[ "transformers", "arxiv:2306.15794", "arxiv:2302.10866", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
null
2023-06-25T21:10:29Z
--- license: bsd-3-clause --- # HyenaDNA Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**. See below for an [overview](#model) of the model and training. Better yet, check out these resources. **Resources:** - [arxiv](https://arxiv.org/abs/2306.15794) - [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) - [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) - [github](https://github.com/HazyResearch/hyena-dna) **Links to all HuggingFace models:** - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main) - [tiny-1k-d256](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen-d256/tree/main) - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main) - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main) - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main) - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main) See [GPU requirements](#hardware) for each model. ### Sample snippet This code example lets you select which pretrained model to load from HuggingFace, perform inference and get embeddings. See the [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) for these classes, or the ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) script in the main [github](https://github.com/HazyResearch/hyena-dna). ```python # instantiate pretrained model pretrained_model_name = 'hyenadna-medium-450k-seqlen' max_length = 450_000 model = HyenaDNAPreTrainedModel.from_pretrained( './checkpoints', pretrained_model_name, ) # create tokenizer, no training involved :) tokenizer = CharacterTokenizer( characters=['A', 'C', 'G', 'T', 'N'], # add DNA characters model_max_length=max_length, ) # create a sample sequence = 'ACTG' * int(max_length/4) tok_seq = tokenizer(sequence)["input_ids"] # place on device, convert to tensor tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device) # unsqueeze for batch dim # prep model and forward model.to(device) model.eval() # deterministic with torch.inference_mode(): embeddings = model(tok_seq) print(embeddings.shape) # embeddings here! ``` ### How to use pretrained weights - [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) The colab is the easiest entry point, you can finetune a small model, and do inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first. - [github](https://github.com/HazyResearch/hyena-dna) Otherwise, checkout of the main HyenaDNA repo for how to load weights into Pytorch Lightning. We use Pytorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the actual weights.ckpt, and then pass it to Pytorch Lightning via command line or config. See the [github](https://github.com/HazyResearch/hyena-dna) README for how to do all that. If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too. 
### GPU requirements (suggested) <a name="hardware"></a> Here are suggestions on the hardware (preferred minimum) we think you can use for each model. GPU during: Pretrain, fine-tune, inference - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4) - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4) - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4) - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4) - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40) T4: 16GB A100-40: 40GB A100-80: 80GB ## Model & Training Overview <a name="model"></a> HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations. This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention). We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer. We pretrain using next token (nucleotide) prediction on the human reference genome (HG38). HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning. Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA! ### Authors Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re. **Contact** Eric Nguyen, etnguyen@stanford.edu Michael Poli, poli@stanford.edu Marjan Faizi, Marjan_Faizi@hms.harvard.edu ## Citation Feel free to cite us :) ``` @article{nguyen2023hyenadna, title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution}, author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré}, year={2023}, eprint={2306.15794}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
LongSafari/hyenadna-tiny-1k-seqlen
LongSafari
2023-08-13T07:04:19Z
132
5
transformers
[ "transformers", "arxiv:2306.15794", "arxiv:2302.10866", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
null
2023-06-22T19:06:15Z
--- license: bsd-3-clause --- # HyenaDNA Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**. See below for an [overview](#model) of the model and training. Better yet, check out these resources. **Resources:** - [arxiv](https://arxiv.org/abs/2306.15794) - [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) - [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) - [github](https://github.com/HazyResearch/hyena-dna) **Links to all HuggingFace models:** - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main) - [tiny-1k-d256](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen-d256/tree/main) - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main) - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main) - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main) - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main) See [GPU requirements](#hardware) for each model. ### Sample snippet This code example lets you select which pretrained model to load from HuggingFace, perform inference and get embeddings. See the [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) for these classes, or the ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) script in the main [github](https://github.com/HazyResearch/hyena-dna). ```python # instantiate pretrained model pretrained_model_name = 'hyenadna-medium-450k-seqlen' max_length = 450_000 model = HyenaDNAPreTrainedModel.from_pretrained( './checkpoints', pretrained_model_name, ) # create tokenizer, no training involved :) tokenizer = CharacterTokenizer( characters=['A', 'C', 'G', 'T', 'N'], # add DNA characters model_max_length=max_length, ) # create a sample sequence = 'ACTG' * int(max_length/4) tok_seq = tokenizer(sequence)["input_ids"] # place on device, convert to tensor tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device) # unsqueeze for batch dim # prep model and forward model.to(device) model.eval() # deterministic with torch.inference_mode(): embeddings = model(tok_seq) print(embeddings.shape) # embeddings here! ``` ### How to use pretrained weights - [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) The colab is the easiest entry point, you can finetune a small model, and do inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first. - [github](https://github.com/HazyResearch/hyena-dna) Otherwise, checkout of the main HyenaDNA repo for how to load weights into Pytorch Lightning. We use Pytorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the actual weights.ckpt, and then pass it to Pytorch Lightning via command line or config. See the [github](https://github.com/HazyResearch/hyena-dna) README for how to do all that. If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too. 
### GPU requirements (suggested) <a name="hardware"></a> Here are suggestions on the hardware (preferred minimum) we think you can use for each model. GPU during: Pretrain, fine-tune, inference - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4) - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4) - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4) - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4) - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40) T4: 16GB A100-40: 40GB A100-80: 80GB ## Model & Training Overview <a name="model"></a> HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations. This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention). We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer. We pretrain using next token (nucleotide) prediction on the human reference genome (HG38). HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning. Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA! ### Authors Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re. **Contact** Eric Nguyen, etnguyen@stanford.edu Michael Poli, poli@stanford.edu Marjan Faizi, Marjan_Faizi@hms.harvard.edu ## Citation Feel free to cite us :) ``` @article{nguyen2023hyenadna, title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution}, author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré}, year={2023}, eprint={2306.15794}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
fp16-guy/Cetus-Mix_Whalefall_fp16_cleaned
fp16-guy
2023-08-13T06:58:15Z
0
4
null
[ "text-to-image", "region:us" ]
text-to-image
2023-07-26T18:24:50Z
--- pipeline_tag: text-to-image --- Cetus-Mix Whalefall, but fp16/cleaned - smaller size, same result. ======== /// **[**original checkpoint link**](https://civitai.com/models/6755?modelVersionId=126564)** *(all rights to the model belong to Eagelaxis)* --- *[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/cetusMix_Whalefall2%2001.png) *(1.99gb version)* *[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/cetusMix_Whalefall2%2002%20no%20vae.png) *(1.83gb version - no vae)* *[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/cetusMix_Whalefall2%20inp%2001%2020230812123319-111-cetusMix_Whalefall2_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)* *[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/cetusMix_Whalefall2%20inp%2002%2020230812123519-111-cetusMix_Whalefall2_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
fp16-guy/Disney_Pixar_Cartoon_Type_A_fp16_cleaned
fp16-guy
2023-08-13T06:57:15Z
0
2
null
[ "text-to-image", "region:us" ]
text-to-image
2023-08-01T10:41:52Z
--- pipeline_tag: text-to-image --- Disney Pixar Cartoon Type A, but fp16/cleaned - smaller size, same result. ======== /// **[**original checkpoint link**](https://civitai.com/models/65203/disney-pixar-cartoon-type-a)** *(all rights to the model belong to PromptSharingSamaritan)* --- *[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/disneypixar%2010%2001%2020230801122719-111-disneyPixarCartoon_v10-Euler%20a-6.png) *(1.99gb version)* *[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/disneypixar%2010%2002%20no%20vae%2020230801123402-111-disneyPixarCartoon_v10-Euler%20a-6.png) *(1.83gb version - no vae)* *[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/disneypixar%2010%20inp%2001%2020230812124144-111-disneyPixarCartoon_v10_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)* *[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/disneypixar%2010%20inp%2002%2020230812124250-111-disneyPixarCartoon_v10_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
NocteZeta/ppo-Huggy
NocteZeta
2023-08-13T06:17:15Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-08-13T06:17:05Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: NocteZeta/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Evan-Lin/Bart-large-abs-yelp-entailment
Evan-Lin
2023-08-13T06:09:54Z
49
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-13T06:02:49Z
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows (this checkpoint is a BART sequence-to-sequence model, so the `text2text-generation` task is used):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Evan-Lin/Bart-large-abs-yelp-entailment")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-yelp-entailment")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-yelp-entailment")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
scoldgrin/ppo-LunarLander-v2
scoldgrin
2023-08-13T05:48:05Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T05:47:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.55 +/- 12.21 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
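Until the author adds their own snippet, here is a minimal loading and evaluation sketch; the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the saved model file follows the "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="scoldgrin/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh LunarLander-v2 environment (requires box2d).
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```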
iknow-lab/ko-flan-zero-v0-0731
iknow-lab
2023-08-13T05:46:38Z
112
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "ko", "dataset:nsmc", "dataset:jason9693/APEACH", "dataset:KETI-AIR/korquad", "dataset:klue", "dataset:smilegate-ai/kor_unsmile", "dataset:kor_nlu", "dataset:skt/kobest_v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-08T13:49:38Z
--- license: apache-2.0 language: - ko pipeline_tag: text-classification widget: - text: 예전에는 주말마다 극장에 놀러갔는데 요새는 좀 안가는 편이에요 [SEP] 댓글 주제를 분류하세요 [SEP] 시네마 - text: >- 인천발 KTX와 관련한 송도역 복합환승센터가 사실상 무산, 단순 철도·버스 위주 환승시설로 만들어진다. 이 때문에 인천시의 인천발 KTX 기점에 앵커시설인 복합환승센터를 통한 인근 지역 경제 활성화를 이뤄낸다는 계획의 차질이 불가피하다. [SEP] 경제에 긍정적인 뉴스인가요? [SEP] 아니요 - text: 마지막에는 k팝 공연보고 좋은 추억 남았으면 좋겠네요 [SEP] 욕설이 포함되어있나요? [SEP] 아니요 datasets: - nsmc - jason9693/APEACH - KETI-AIR/korquad - klue - smilegate-ai/kor_unsmile - kor_nlu - skt/kobest_v1 --- ## 사용 예시 ```python # Load model directly from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("iknow-lab/ko-flan-zero-v0-0731") model = AutoModelForSequenceClassification.from_pretrained("iknow-lab/ko-flan-zero-v0-0731") def inference(instruction, input, labels): instruction = f"{input} [SEP] {instruction}" inputs = tokenizer([instruction] * len(labels), labels, truncation=True, padding=True, return_tensors="pt") scores = model(**inputs).logits.squeeze(1).tolist() output = dict(zip(labels, scores)) print(instruction, output) inference( "문장을 감성분류해주세요", "아 영화 개노잼", ["긍정적", "부정적"] ) inference( "글과 관련된 내용을 만들어주세요", "예전에는 주말마다 극장에 놀러갔는데 요새는 좀 안가는 편이에요", ["영화에 관한 글이다", "드라마에 관한 글입니다"] ) inference( "글을 읽고 시장에 미칠 영향을 판단해보세요", """인천발 KTX와 관련한 송도역 복합환승센터가 사실상 무산, 단순 철도·버스 위주 환승시설로 만들어진다. 이 때문에 인천시의 인천발 KTX 기점에 앵커시설인 복합환승센터를 통한 인근 지역 경제 활성화를 이뤄낸다는 계획의 차질이 불가피하다. 25일 시에 따르면 연수구 옥련동 104 일대 29만1천725㎡(8만8천평)에 추진 중인 2만8천62가구 규모의 송도역세권구역 도시개발사업과 연계, KTX 송도역 복합환승센터와 상업시설·업무시설 등의 조성을 추진 중이다. """, ["긍정", "부정", "중립"] ) ``` ### 실행 결과 ``` 아 영화 개노잼 [SEP] 문장을 감성분류해주세요 {'긍정적': -7.878206253051758, '부정적': 50.96009826660156} 예전에는 주말마다 극장에 놀러갔는데 요새는 좀 안가는 편이에요 [SEP] 글과 관련된 내용을 만들어주세요 {'영화에 관한 글이다': 25.37109375, '드라마에 관한 글입니다': -31.869916915893555} 인천발 KTX와 관련한 송도역 복합환승센터가 사실상 무산, 단순 철도·버스 위주 환승시설로 만들어진다. 이 때문에 인천시의 인천발 KTX 기점에 앵커시설인 복합환승센터를 통한 인근 지역 경제 활성화를 이뤄낸다는 계획의 차질이 불가피하다. 25일 시에 따르면 연수구 옥련동 104 일대 29만1천725㎡(8만8천평)에 추진 중인 2만8천62가구 규모의 송도역세권구역 도시개발사업과 연계, KTX 송도역 복합환승센터와 상업시설·업무시설 등의 조성을 추진 중이다.  [SEP] 글을 읽고 시장에 미칠 영향을 판단해보세요 {'긍정': -61.86758804321289, '부정': 23.72732925415039, '중립': -70.4837417602539} ``` ## 학습 데이터 구성 ```json { "splits": "train", "tasks": "nsmc,apeach,korquad_v1.0,klue_mrc,klue_nli,klue_ynat,kor_nlu,unsmile,klue_re,kobest_copa,kobest_hellaswag,kobest_boolq,kobest_wic,niklex,nikl_absa", "max_instance_per_task": 20000, "split_train": { "nsmc": 20000, "apeach": 7895, "korquad_v1.0": 20000, "klue_mrc": 17553, "klue_nli": 8046, "klue_ynat": 20000, "kor_nlu": 20000, "unsmile": 15002, "klue_re": 20000, "kobest_copa": 3075, "kobest_hellaswag": 499, "kobest_boolq": 3664, "kobest_wic": 3317, "niklex": 20000, "nikl_absa": 2139 }, "split_train_total": 181190 } ``` ## 평가(test set) | task | accuracy | | --- | --- | | [nsmc](https://huggingface.co/datasets/nsmc) | 85.92 | | [jason9693/APEACH](https://huggingface.co/datasets/jason9693/APEACH) | 32.12 | | [klue-ynat](https://huggingface.co/datasets/klue) | 77.59 | | [kobest-boolq](https://huggingface.co/datasets/skt/kobest_v1) | 76.99 | | [kobest-copa](https://huggingface.co/datasets/skt/kobest_v1) | 61.2 | | [kobest-hellaswag](https://huggingface.co/datasets/skt/kobest_v1) | 코드 버그 있어서 제외 | | [kobest-sentineg](https://huggingface.co/datasets/skt/kobest_v1) | 55.92 | | [kobest-wic](https://huggingface.co/datasets/skt/kobest_v1) | 58.49 | ### 평가 방식 - 모델에 `[CLS] {input} [SEP] {instruction} [SEP] label [SEP]` 형식으로 넣고 나온 positive와 negative끼리 비교함. 
- The positives are the gold labels, and the negatives are all labels other than the gold label. - An example is counted as correct when the gold label's score is higher than every negative's score; accuracy is measured this way. Mapping code used for the test: ``` klue_ynat_labelToTextDict = { 0: "IT과학", 1: "경제", 2: "사회", 3: "생활문화", 4: "세계", 5: "스포츠", 6: "정치", } klue_ynat_labels = set(klue_ynat_labelToTextDict.values()) def klue_ynat_mapper(item): positives = [klue_ynat_labelToTextDict[item["label"]]] return { "instruction": "문장을 읽고 주제를 분류하세요", "input": item["title"], "positives": positives, "negatives": klue_ynat_labels - set(positives) } kobest_wic_labels = ["아니오", "예"] def kobest_wic_mapper(item): return { "instruction": "주어진 두 문장에서 단어 {word}은(는) 동일한 의미로 사용되었나요?".format(word=item["word"]), "input": "문장1: {context_1}\n문장2: {context_2}".format(**item), "positives": [kobest_wic_labels[item['label']]], "negatives": [kobest_wic_labels[1 - item['label']]] } copa_question = { "결과": "이후에 이어질 결과는?", "원인": "이러한 일이 일어난 원인은?" } def kobest_copa_mapper(item): answers = [item["alternative_1"], item["alternative_2"]] return { "instruction": copa_question[item["question"]], "input": item["premise"], "positives": [answers[item['label']]], "negatives": [answers[1 - item['label']]] } def kobest_hellaswag_mapper(item): answers = [item[f"ending_{i}"] for i in range(1, 5)] label = answers[item['label']] answers.remove(label) return { "instruction": "이후에 이어질 내용으로 가장 적절한 것은?", "input": item["context"], "positives": [label], "negatives": answers } kobest_boolq_labels = ["아니오", "예"] def kobest_boolq_mapper(item): return { "instruction": item["question"], "input": item["paragraph"], "positives": [kobest_boolq_labels[item['label']]], "negatives": [kobest_boolq_labels[1 - item['label']]] } kobest_sentineg_labels = ["부정", "긍정"] def kobest_sentineg_mapper(item): return { "instruction": "주어진 문장의 감정을 분류하세요", "input": item["sentence"], "positives": [kobest_boolq_labels[item['label']]], "negatives": [kobest_boolq_labels[1 - item['label']]] } nsmc_labels = ["부정", "긍정"] def nsmc_mapper(item): return { "instruction": "주어진 문장의 감정을 분류하세요", "input": item["document"], "positives": [nsmc_labels[item['label']]], "negatives": [nsmc_labels[1 - item['label']]] } apeach_labels = ["혐오 표현이 아닙니다", "혐오표현"] def apeach_mapper(item): return { "instruction": "혐오성을 분류해보세요.", "input": item["text"], "positives": [nsmc_labels[item['class']]], "negatives": [nsmc_labels[1 - item['class']]] } EVAL_LIST = { "klue-ynat": dict( load_args=dict( path="klue", name="ynat", split="validation" ), mapper=klue_ynat_mapper ), "nsmc": dict( load_args=dict( path="nsmc", split="test" ), mapper=nsmc_mapper ), "apeach": dict( load_args=dict( path="jason9693/APEACH", split="test" ), mapper=apeach_mapper ), "kobest-wic": dict( load_args=dict( path="skt/kobest_v1", name="wic", split="test" ), mapper=kobest_wic_mapper ), "kobest-copa": dict( load_args=dict( path="skt/kobest_v1", name="copa", split="test" ), mapper=kobest_copa_mapper ), "kobest-hellaswag": dict( load_args=dict( path="skt/kobest_v1", name="hellaswag", split="test" ), mapper=kobest_hellaswag_mapper ), "kobest-boolq": dict( load_args=dict( path="skt/kobest_v1", name="boolq", split="test" ), mapper=kobest_boolq_mapper ), "kobest-sentineg": dict( load_args=dict( path="skt/kobest_v1", name="sentineg", split="test" ), mapper=kobest_sentineg_mapper ) } ```
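The card stops at the mappers and does not include the loop that produces the accuracy table above, so the following is only a minimal sketch of the described protocol. It reuses `EVAL_LIST` and the mapper functions from the block above; the helper names, batching, and the chosen task in the final print are illustrative assumptions, not the authors' exact script.

```python
# Minimal sketch (assumed, not the authors' exact script) of the evaluation:
# score every candidate label with the "{input} [SEP] {instruction}" / label
# pair format from the usage example, and count an example as correct when
# the gold label outscores every negative label.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("iknow-lab/ko-flan-zero-v0-0731")
model = AutoModelForSequenceClassification.from_pretrained("iknow-lab/ko-flan-zero-v0-0731")
model.eval()

@torch.no_grad()
def score_labels(instruction, input_text, labels):
    text = f"{input_text} [SEP] {instruction}"
    enc = tokenizer([text] * len(labels), list(labels),
                    truncation=True, padding=True, return_tensors="pt")
    return model(**enc).logits.squeeze(1).tolist()

def evaluate(task_name):
    cfg = EVAL_LIST[task_name]  # EVAL_LIST and the mappers come from the block above
    dataset = load_dataset(**cfg["load_args"])
    correct = 0
    for item in dataset:
        ex = cfg["mapper"](item)
        pos = score_labels(ex["instruction"], ex["input"], list(ex["positives"]))
        neg = score_labels(ex["instruction"], ex["input"], list(ex["negatives"])) if ex["negatives"] else []
        # correct only when the gold label beats every negative label
        if not neg or min(pos) > max(neg):
            correct += 1
    return correct / len(dataset)

print(evaluate("kobest-boolq"))
```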
Dredta/Ukiyana
Dredta
2023-08-13T05:12:23Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-13T05:10:04Z
--- license: creativeml-openrail-m ---
nagupv/Llama-7B_LLMExam_f0
nagupv
2023-08-13T05:01:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-12T12:37:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
learn3r/bart_memsum
learn3r
2023-08-13T05:00:27Z
106
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:learn3r/gov_report_memsum_oracle", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-12T15:54:25Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer datasets: - learn3r/gov_report_memsum_oracle model-index: - name: bart_memsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart_memsum This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the learn3r/gov_report_memsum_oracle dataset. It achieves the following results on the evaluation set: - Loss: 1.7431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
Chattiori/MelonMix
Chattiori
2023-08-13T04:37:41Z
37
2
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Grapefruit", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-03-20T09:42:48Z
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - Grapefruit --- # <span style="color:#00a0a0; font-size:30pt; font-weight:bolder; font-style:italic;"> MelonMix </span> This model is a checkpoint merge of Anything v4.5, AbyssOrangeMix 3A1B, GrapeFruitV4.1 and 7th Anime v3 C. V2 uses AnyOrangeMix 48A13B, Hassaku v1.3, blue_pencil EX, MIX-Pro v4.5+ColorBox and MeinaPastel V6. Since AnyOrangeMix 48A13B is itself a mix of Anything v5, AnythingElse v4.5, AbyssOrangeMix3 A1B and AbyssOrangeMix3 A3, the merge recipe shown below is equivalent. For V2, I used [Chattiori-Model-Merger](https://github.com/Faildes/Chattiori-Model-Merger). ## Merge Recipe V1:(Anything v4.5 (0.5) + AbyssOrangeMix 3A1B (0.5) Weighted Sum) (0.5) + (grapefruitV4.1 (0.5) + 7th Anime v3 C (0.5) Weighted Sum) (0.5) Weighted Sum V2: * Weighted Sum, [**AnythingElse V4-v4.5**](https://civitai.com/models/4855) + [**Anything v5-Prt-Re**](https://civitai.com/models/9409), alpha(0.6) >> **TEMP_0** * Weighted Sum, [**AbyssOrangeMix3-A1B**](https://civitai.com/models/9942) + [**AbyssOrangeMix3-A3**](https://civitai.com/models/9942), alpha(0.5) >> **TEMP_1** * Sum Twice, **TEMP_0** + **TEMP_1** + [**MIX-Pro-V4.5+ColorBox**](https://civitai.com/models/14206), alpha(0.5) rand_beta(0.3, 0.7, 17546192) >> **TEMP_2** * Sum Twice, [**Hassaku (hentai model)-v1.3**](https://civitai.com/models/2583) + [**MeinaPastel-V6**](https://civitai.com/models/11866) + [**blue_pencil-EX**](https://civitai.com/models/79083), rand_alpha(0.35, 0.65, 5481652) rand_beta(0.2, 0.45, 61427253) >> **TEMP_3** * Weighted Sum, **TEMP_3** + **TEMP_2**, rand_alpha(0.25, 0.75, 964451837) >> **MelonMixV2**
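For readers unfamiliar with the terms above: a "Weighted Sum" merge blends two checkpoints tensor by tensor with a single ratio, and "Sum Twice" chains two such blends over three checkpoints. The sketch below only illustrates the basic operation; it is not the Chattiori-Model-Merger tool itself, and the file names are hypothetical.

```python
# Illustration only: a "Weighted Sum" merge computes (1 - alpha) * A + alpha * B
# for every tensor the two checkpoints share. Assumes plain state-dict files.
import torch

def weighted_sum_merge(path_a, path_b, alpha, out_path):
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    merged = {}
    for key, tensor_a in sd_a.items():
        if key in sd_b and torch.is_tensor(tensor_a):
            merged[key] = (1.0 - alpha) * tensor_a.float() + alpha * sd_b[key].float()
        else:
            merged[key] = tensor_a  # keep tensors that exist only in checkpoint A
    torch.save(merged, out_path)

# Hypothetical file names, mirroring the first V2 step (alpha = 0.6):
# weighted_sum_merge("anythingelse-v4.5.ckpt", "anything-v5-prt-re.ckpt", 0.6, "TEMP_0.ckpt")
```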
fnlp/claif-scaled-roberta-base
fnlp
2023-08-13T04:32:19Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "en", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-13T03:53:08Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 language: - en --- # fnlp/claif-scaled-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('fnlp/claif-scaled-roberta-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('fnlp/claif-scaled-roberta-base') model = AutoModel.from_pretrained('fnlp/claif-scaled-roberta-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fnlp/claif-scaled-roberta-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 37989 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 11397, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
fnlp/claif-scaled-bert-base
fnlp
2023-08-13T04:31:28Z
2
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-13T04:03:32Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 language: - en --- # fnlp/claif-scaled-bert-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('fnlp/claif-scaled-bert-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('fnlp/claif-scaled-bert-base') model = AutoModel.from_pretrained('fnlp/claif-scaled-bert-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fnlp/claif-scaled-bert-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 37989 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 11397, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
fnlp/claif-roberta-base
fnlp
2023-08-13T04:31:00Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "en", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-13T04:24:45Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 language: - en --- # fnlp/claif-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('fnlp/claif-roberta-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('fnlp/claif-roberta-base') model = AutoModel.from_pretrained('fnlp/claif-roberta-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fnlp/claif-roberta-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3556 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1067, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
fnlp/claif-bert-base
fnlp
2023-08-13T04:30:34Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-13T04:14:49Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 language: - en --- # fnlp/claif-bert-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('fnlp/claif-bert-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('fnlp/claif-bert-base') model = AutoModel.from_pretrained('fnlp/claif-bert-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fnlp/claif-bert-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3556 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1067, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
iakarshu/latr-base
iakarshu
2023-08-13T04:22:51Z
105
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-13T04:21:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: latr-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # latr-base This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.14.4 - Tokenizers 0.13.3
Evan-Lin/Bart-large-abs-yelp-allure5
Evan-Lin
2023-08-13T04:18:03Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-13T04:09:46Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin/Bart-large-abs-yelp-allure5") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-yelp-allure5") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-yelp-allure5") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
asenella/MMVAEPlus_beta_10_scale_True_seed_1
asenella
2023-08-13T04:12:41Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T17:06:21Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
asenella/MMVAEPlus_beta_5_scale_True_seed_3
asenella
2023-08-13T04:11:57Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T17:27:30Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
Envoid/Bacchus-22B-ggml
Envoid
2023-08-13T04:11:37Z
0
1
null
[ "region:us" ]
null
2023-08-13T03:38:26Z
q4_0 GGML quantization of Bacchus-22B. See the main repo for more details about the model.
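The card itself gives no loading instructions. As a hedged example, a 2023-era q4_0 GGML file is typically run through a llama.cpp binding that still supports GGML (newer releases are GGUF-only); the model file name below is a placeholder, so check the repository's file list before using it.

```python
# Hypothetical usage sketch with llama-cpp-python; the model file name is a
# placeholder, not the actual file name in this repository.
from llama_cpp import Llama

llm = Llama(model_path="./bacchus-22b.ggmlv3.q4_0.bin", n_ctx=2048)
out = llm("Describe the character of Bacchus in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```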
asenella/MMVAEPlus_beta_10_scale_True_seed_0
asenella
2023-08-13T04:11:07Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T16:46:12Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
Zeroxdesignart/chatbot-techinfo
Zeroxdesignart
2023-08-13T03:43:07Z
0
0
null
[ "region:us" ]
null
2023-08-13T03:25:09Z
--- license: openrail datasets: - fka/awesome-chatgpt-prompts - lmsys/chatbot_arena_conversations --- ```python from datasets import load_dataset dataset = load_dataset("fka/awesome-chatgpt-prompts") ``` Model Name: ChatGPT-Prompt-Generator Model Type: Chatbot Model Framework: Python Model Description: This model is a chatbot that can be used to create Python applications. The chatbot can ask the user for their app idea and their chosen programming language. Then, the chatbot will generate the initial code for the app based on the user's input. Finally, the chatbot will generate the requirements.txt file. Model Input: The model input is the user's app idea and their chosen programming language. Model Output: The model output is the initial code for the app and the requirements.txt file. Model Performance: The model has been tested on a variety of app ideas and programming languages. It has been shown to be able to generate accurate and efficient code. Model Limitations: The model is not perfect. It can sometimes generate incorrect or inefficient code. It is also important to note that the model is only a tool. It cannot replace the need for human creativity and expertise. Model Citations: The ChatGPT-Prompt-Generator model is based on the ChatGPT model, which was developed by OpenAI. The ChatGPT model is a large language model that was trained on a massive dataset of text and code. The ChatGPT model has been shown to be able to generate human-quality text and code. Model Availability: The ChatGPT-Prompt-Generator model is available for free. It can be downloaded from the ChatGPT website.
BAAI/Emu
BAAI
2023-08-13T03:32:51Z
0
23
diffusers
[ "diffusers", "arxiv:2307.05222", "region:us" ]
null
2023-07-10T09:03:01Z
<div align='center'> <h1>Emu: An Open Multimodal Generalist</h1> <h3><a href="https://arxiv.org/abs/2307.05222">Generative Pretraining in Multimodality</a></h3> [Quan Sun](https://github.com/Quan-Sun)<sup>1*</sup>, [Qiying Yu](https://yqy2001.github.io)<sup>2,1*</sup>, [Yufeng Cui]()<sup>1*</sup>, [Fan Zhang]()<sup>1*</sup>, [Xiaosong Zhang](https://github.com/zhangxiaosong18)<sup>1*</sup>, [Yueze Wang]()<sup>1</sup>, [Hongcheng Gao]()<sup>1</sup>, [Jingjing Liu](https://air.tsinghua.edu.cn/en/info/1046/1194.htm)<sup>2</sup>, [Tiejun Huang](https://scholar.google.com/citations?user=knvEK4AAAAAJ&hl=en)<sup>1,3</sup>, [Xinlong Wang](https://www.xloong.wang/)<sup>1</sup> <sup>1</sup> [BAAI](https://www.baai.ac.cn/english.html), <sup>2</sup> [THU](https://air.tsinghua.edu.cn), <sup>3</sup> [PKU](https://english.pku.edu.cn/) <br><sup>*</sup> Equal Contribution | [Paper](https://arxiv.org/abs/2307.05222) | [Demo(tmp)](http://218.91.113.230:9002/) | </div> **Emu** is a Large Multimodal Model (LMM) trained with a unified autoregressive objective, *i.e.*, predict-the-next-element, including both visual embeddings and textual tokens. Trained under this objective, **Emu** can serve as a generalist interface for diverse multimodal tasks, such as image captioning, image/video question answering, and text-to-image generation, together with new abilities like in-context text and image generation, and image blending. ## Setup Clone the github repository and install required packages: ```shell git clone https://github.com/baaivision/Emu cd Emu pip install -r requirements.txt ``` ## Model Weights We release the pretrained and instruction-tuned weights of **Emu**. Our weights are subject to LLaMA's [license](https://github.com/facebookresearch/llama/blob/main/LICENSE). | Model name | Weight | | ---------- | ------------------------------------------------------- | | **Emu** | [🤗 HF link](https://huggingface.co/BAAI/Emu/blob/main/Emu-pretrain.pt) (27GB) | | **Emu-I** | [🤗 HF link](https://huggingface.co/BAAI/Emu/blob/main/Emu-instruct.pt) (27GB) | ## Model Usage At present, we provide inference code for image captioning and visual question answering: ```sh python emu_inference.py --instruct --ckpt-path $Instruct_CKPT_PATH ``` ## Acknowledgement We thank the great work from [LLaMA](https://github.com/facebookresearch/llama), [BLIP-2](https://github.com/salesforce/LAVIS), [Stable Diffusion](https://github.com/CompVis/stable-diffusion), and [FastChat](https://github.com/lm-sys/FastChat). ## Citation If you find Emu useful for your research and applications, please consider citing: ``` @article{Emu, title={Generative Pretraining in Multimodality}, author={Sun, Quan and Yu, Qiying and Cui, Yufeng and Zhang, Fan and Zhang, Xiaosong and Wang, Yueze and Gao, Hongcheng and Liu, Jingjing and Huang, Tiejun and Wang, Xinlong}, publisher={arXiv:2307.05222}, year={2023}, } ```
Rounak28/bengaliAI-finetuned-0-55000-new
Rounak28
2023-08-13T02:48:33Z
113
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-12T17:29:29Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: bengaliAI-finetuned-0-55000-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bengaliAI-finetuned-0-55000-new This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2824 - Wer: 61.2368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.25e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2921 | 0.65 | 2000 | 0.2824 | 61.2368 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.0 - Datasets 2.14.4 - Tokenizers 0.13.3
SamuelReyes/LunarLander
SamuelReyes
2023-08-13T02:47:20Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T02:47:13Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -141.04 +/- 82.43 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'fff': '1' 'repo_id': 'SamuelReyes/LunarLander' 'batch_size': 512 'minibatch_size': 128} ```
skittlesmurf/ppo-LunarLander-v2
skittlesmurf
2023-08-13T02:46:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T02:46:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 241.13 +/- 17.13 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
abhowmick22/coinvent-llama2-test
abhowmick22
2023-08-13T02:35:41Z
0
0
null
[ "en", "arxiv:1910.09700", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2023-08-13T01:45:39Z
--- license: cc-by-nc-sa-4.0 language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MichaelYxWang/ppo-LunarLander-v2
MichaelYxWang
2023-08-13T02:25:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T02:25:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 241.81 +/- 13.41 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Yong-Sik/distilbert-base-uncased-distilled-clinc
Yong-Sik
2023-08-13T02:21:40Z
119
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T01:59:46Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9496774193548387 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2461 - Accuracy: 0.9497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2483 | 1.0 | 318 | 3.1615 | 0.7358 | | 2.3996 | 2.0 | 636 | 1.5548 | 0.8626 | | 1.1607 | 3.0 | 954 | 0.7750 | 0.9142 | | 0.5651 | 4.0 | 1272 | 0.4625 | 0.9358 | | 0.3003 | 5.0 | 1590 | 0.3357 | 0.9410 | | 0.1754 | 6.0 | 1908 | 0.2854 | 0.9452 | | 0.1134 | 7.0 | 2226 | 0.2637 | 0.9474 | | 0.0817 | 8.0 | 2544 | 0.2490 | 0.9487 | | 0.0665 | 9.0 | 2862 | 0.2486 | 0.9490 | | 0.0577 | 10.0 | 3180 | 0.2461 | 0.9497 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
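The card does not spell out what "distilled" refers to here. For context only, response-based distillation of an intent classifier is usually trained with a mix of hard-label cross-entropy and a temperature-softened KL term against a teacher model; the sketch below shows that generic loss, not the exact recipe behind this checkpoint.

```python
# Generic knowledge-distillation loss (an assumption for context, not this
# model's documented training code): blend cross-entropy on gold labels with
# KL divergence between temperature-softened student and teacher logits.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```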
tsobolev/speecht5_finetuned_voxpopuli_fi
tsobolev
2023-08-13T02:09:03Z
83
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "fi", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-08-13T00:00:34Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_fi results: [] language: - fi pipeline_tag: text-to-speech --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_fi This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5025 | 13.18 | 1000 | 0.4663 | | 0.4873 | 26.36 | 2000 | 0.4581 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.0 - Datasets 2.14.3 - Tokenizers 0.13.3
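The card covers only the fine-tuning run. A usage sketch for a SpeechT5 TTS checkpoint typically looks like the following; the speaker-embedding dataset and index are common public defaults rather than something stated in this card, and the processor is assumed to be included in the repo (otherwise load it from microsoft/speecht5_tts).

```python
# Hedged inference sketch for a fine-tuned SpeechT5 TTS model; the x-vector
# source below is an assumed public default, not part of this model card.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("tsobolev/speecht5_finetuned_voxpopuli_fi")
model = SpeechT5ForTextToSpeech.from_pretrained("tsobolev/speecht5_finetuned_voxpopuli_fi")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker x-vector from a commonly used public dataset (assumption).
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hyvää huomenta, mitä kuuluu?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```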
aigrils2/beautifulv6
aigrils2
2023-08-13T01:45:52Z
1
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-24T09:44:14Z
--- license: openrail pipeline_tag: text-to-image --- Converted from the original safetensors checkpoint to a Diffusers-compatible model, with convert_ema=False; this may be the cause of lower quality. Nice to see the downloads. Give the model a like if you find it convenient to use.
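Since the repository is already in Diffusers format, loading it should look like the standard pipeline call below; the prompt and generation settings are placeholders rather than the author's recommendations.

```python
# Hedged loading sketch for this Diffusers-format checkpoint; prompt and
# generation settings are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("aigrils2/beautifulv6", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a portrait photo, soft natural lighting",
             num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("sample.png")
```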
indonesian-nlp/gpt2-medium-indonesian
indonesian-nlp
2023-08-13T01:41:56Z
660
11
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "id", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: id widget: - text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira." --- # GPT2-medium-indonesian This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team. The demo can be found [here](https://huggingface.co/spaces/indonesian-nlp/gpt2-app). ## How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='indonesian-nlp/gpt2-medium-indonesian') >>> set_seed(42) >>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5) [{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\ “Kau tau, bagaimana dulu kita bertemu?” aku'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\ Tuhan akan memberi lebih dari apa yang kita'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('indonesian-nlp/gpt2-medium-indonesian') model = GPT2Model.from_pretrained('indonesian-nlp/gpt2-medium-indonesian') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('indonesian-nlp/gpt2-medium-indonesian') model = TFGPT2Model.from_pretrained('indonesian-nlp/gpt2-medium-indonesian') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Limitations and bias The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model. 
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we > do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry > out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, > race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with > similar levels of caution around use cases that are sensitive to biases around human attributes. We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/indonesian-nlp/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/indonesian-nlp/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications. ### Gender bias We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online. ![gender bias - male](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_male.png) The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant). ![gender bias - female](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_female.png) ### Ethnicity bias We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme: * Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity) * Topic - we will use 5 different topics: * random act: *entered home* * said: *said* * works as: *works as* * intent: *let [person] ...* * define: *is* Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...) We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector. The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline. ![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_ethnicity.png) ### Religion bias With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. 
We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline. The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline. ![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_religion.png) ## Training data The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py) and we also only included links that have been cited by the Indonesian Wikipedia. ## Training procedure The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `6d 3h 7m 26s`. ### Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | dataset | train loss | eval loss | eval perplexity | | ---------- | ---------- | -------------- | ---------- | | ID OSCAR+mc4+Wikipedia (29GB) | 2.79 | 2.696 | 14.826 | ### Tracking The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-medium-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya). ## Team members - Akmal ([@Wikidepia](https://huggingface.co/Wikidepia)) - alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner)) - Cahya Wirawan ([@cahya](https://huggingface.co/cahya)) - Galuh Sahid ([@Galuh](https://huggingface.co/Galuh)) - Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia)) - Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli)) - Samsul Rahmadani ([@munggok](https://huggingface.co/munggok)) ## Future work We would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains if we can get the necessary hardware resources.
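As a complement to the bias probes described in the Limitations section above, a minimal reproduction sketch could look like the following; the prompt wording, sample count, and post-processing are assumptions rather than the exact analysis code used for the charts.

```python
# Hedged sketch of the ethnicity/religion probe: sample continuations for a
# templated prompt, then score them with the hate-speech detector cited above.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="indonesian-nlp/gpt2-medium-indonesian")
detector = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-indonesian")

set_seed(42)
prompt = "seorang perempuan sunda masuk ke rumah"
samples = generator(prompt, max_length=50, num_return_sequences=10)

# Strip the identifying prefix before scoring, as the card describes.
continuations = [s["generated_text"][len(prompt):] for s in samples]
print(detector(continuations))
```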
indonesian-nlp/gpt2
indonesian-nlp
2023-08-13T01:41:27Z
338
8
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "id", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: id
widget:
- text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira."
---

# GPT2-small-indonesian

This is a model pretrained on the Indonesian language with a causal language modeling (CLM) objective, which was first introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/).

This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.

The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian).

## How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='flax-community/gpt2-small-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)

[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\ “Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\ Tuhan akan memberi lebih dari apa yang kita'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian')
model = GPT2Model.from_pretrained('flax-community/gpt2-small-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian')
model = TFGPT2Model.from_pretrained('flax-community/gpt2-small-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Limitations and bias

The training data used for this model comes from the Indonesian portions of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). These datasets contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of the biased content that eventually ends up in the training data. These biases might also affect models that are fine-tuned using this model.
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans
> unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.

We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/flax-community/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/flax-community/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications.

### Gender bias

We generated 50 texts starting with prompts "She/He works as". After some preprocessing (lowercasing and stopword removal), we obtained the texts used to generate the word clouds of female/male professions.

The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.

![gender bias - male](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_male.png)

The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).

![gender bias - female](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_female.png)

### Ethnicity bias

We generated 1,200 texts to assess bias across ethnicity and gender vectors. We created prompts with the following scheme:

* Person - we assessed 5 ethnicities (Sunda, Batak, Minahasa, Dayak, Asmat) plus a Neutral baseline (no ethnicity)
* Topic - we used 5 different topics:
    * random act: *entered home*
    * said: *said*
    * works as: *works as*
    * intent: *let [person] ...*
    * define: *is*

A sample generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)

We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on an Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first words identifying the ethnicity and gender from the generated text before running the hate speech detector.

The following chart shows the intensity of hate speech associated with the generated texts, with outlier scores removed. Some ethnicities score higher than the neutral baseline.

![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_ethnicity.png)
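As a minimal sketch of how such a prompt grid can be enumerated (the Indonesian topic phrasings and the person/gender/topic word order below are assumptions for illustration, not the exact templates used in the analysis notebook):

```python
from itertools import product

# 5 ethnicities plus a neutral baseline, 2 genders, 5 topic templates
ethnicities = ["sunda", "batak", "minahasa", "dayak", "asmat", ""]  # "" = neutral
genders = ["perempuan", "laki-laki"]
topics = ["masuk ke rumah", "berkata", "bekerja sebagai", "biarkan dia", "adalah"]

prompts = [
    " ".join(filter(None, ["seorang", g, e, t]))
    for e, g, t in product(ethnicities, genders, topics)
]

# 6 x 2 x 5 = 60 prompt templates; if 20 continuations are sampled per prompt,
# this matches the 1,200 generated texts analysed above.
print(len(prompts))  # 60
```

The same enumeration with the 6 religions plus a neutral baseline (7 × 2 × 5 prompts) would similarly account for the 1,400 texts in the religion analysis below.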
### Religion bias

Using the same methodology as above, we generated 1,400 texts to assess bias across religion and gender vectors. We assessed 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism), with Neutral (no religion) as a baseline.

The following chart shows the intensity of hate speech associated with the generated texts, with outlier scores removed. Some religions score higher than the neutral baseline.

![bias analysis - religions](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_religion.png)

## Training data

The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and Wikipedia for the Indonesian language. We filtered and reduced the mc4 dataset so that we ended up with 29 GB of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py), and we also only included links that have been cited by the Indonesian Wikipedia.

## Training procedure

The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `4d 14h 50m 47s`.

### Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| ID OSCAR+mc4+Wikipedia (29GB) | 3.046 | 2.926 | 18.66 |

### Tracking

The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-small-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).

## Team members

- Akmal ([@Wikidepia](https://huggingface.co/Wikidepia))
- alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner))
- Cahya Wirawan ([@cahya](https://huggingface.co/cahya))
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))

## Future work

We would like to further pre-train the model with larger and cleaner datasets and fine-tune it to specific domains if we can get the necessary hardware resources.
cto-algo-huggingface/eternity-ring-tiffany-style
cto-algo-huggingface
2023-08-13T01:38:23Z
29
1
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T01:35:34Z
---
license: creativeml-openrail-m
tags:
- text-to-image
---

### eternity_ring_tiffany_style on Stable Diffusion via Dreambooth

#### model by cto-algo-huggingface

This is the Stable Diffusion model fine-tuned on the eternity_ring_tiffany_style concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **<eternity_ring> tiffany**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/9.jpeg)
![image 1](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/2.jpeg)
![image 2](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/7.jpeg)
![image 3](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/13.jpeg)
![image 4](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/3.jpeg)
![image 5](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/8.jpeg)
![image 6](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/5.jpeg)
![image 7](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/12.jpeg)
![image 8](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/11.jpeg)
![image 9](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/1.jpeg)
![image 10](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/0.jpeg)
![image 11](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/10.jpeg)
![image 12](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/6.jpeg)
![image 13](https://huggingface.co/cto-algo-huggingface/eternity-ring-tiffany-style/resolve/main/concept_images/4.jpeg)
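As a rough local inference sketch (not part of the original card; the prompt wording around the instance prompt and the generation settings are assumptions), the concept can be loaded with `diffusers` roughly as follows:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Dreambooth weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "cto-algo-huggingface/eternity-ring-tiffany-style",
    torch_dtype=torch.float16,
).to("cuda")

# Use the instance prompt from the card; everything around it is illustrative
prompt = "a photo of <eternity_ring> tiffany on a velvet cushion, studio lighting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("eternity_ring_tiffany.png")
```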
degor/ppp-Pyramids
degor
2023-08-13T01:27:21Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-08-13T01:26:19Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: degor/ppp-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
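As a small sketch (not part of the original card), the checkpoint files in this repository (including the `.onnx` policy) can also be fetched locally with `huggingface_hub` before resuming training or loading the agent in Unity; the local directory name below is arbitrary:

```python
from huggingface_hub import snapshot_download

# Download the trained Pyramids checkpoint from this repository into a local folder
local_dir = snapshot_download(
    repo_id="degor/ppp-Pyramids",
    local_dir="./downloads/ppp-Pyramids",
)
print(f"Model files downloaded to: {local_dir}")
```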