| Column | Dtype | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-07 18:30:29 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (544 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-07 18:30:28 |
| card | string (length) | 11 | 1.01M |
hw2942/bert-base-chinese-SSEC
hw2942
2023-08-14T03:38:11Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-chinese", "base_model:finetune:google-bert/bert-base-chinese", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-14T03:25:44Z
--- base_model: bert-base-chinese tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-chinese-wallstreetcn-morning-news-market-overview-SSEC-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-wallstreetcn-morning-news-market-overview-SSEC-v3 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1007 - Accuracy: 0.6875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 34 | 2.2173 | 0.7188 | | No log | 2.0 | 68 | 1.8368 | 0.7188 | | No log | 3.0 | 102 | 2.7822 | 0.625 | | No log | 4.0 | 136 | 2.3597 | 0.7188 | | No log | 5.0 | 170 | 3.3032 | 0.5312 | | No log | 6.0 | 204 | 2.9527 | 0.6562 | | No log | 7.0 | 238 | 2.7575 | 0.6875 | | No log | 8.0 | 272 | 2.9714 | 0.6875 | | No log | 9.0 | 306 | 3.0941 | 0.6875 | | No log | 10.0 | 340 | 3.1007 | 0.6875 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
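The card above stops at the training summary and gives no inference code. A minimal usage sketch, assuming the standard transformers text-classification pipeline; the label-to-market-direction mapping is not documented in the card, so the returned label is left uninterpreted here:

```python
# Minimal inference sketch (not from the original card): load the fine-tuned
# checkpoint with the generic transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hw2942/bert-base-chinese-SSEC",  # this repo
)

# Example headline-style input; the meaning of the predicted label is not
# documented in the card, so interpret it against your own label mapping.
print(classifier("上证指数今日高开高走，收盘上涨逾百点。"))
```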
mrkusypl/Miroslaw-Stabinski
mrkusypl
2023-08-14T02:53:11Z
0
0
null
[ "pl", "region:us" ]
null
2023-08-07T20:26:39Z
--- language: - pl --- <center> <img src="https://cdn.discordapp.com/attachments/1138209218969731183/1138209219384979597/240774873_122099140169811_8790049852222389754_n.jpg"></img> <h1>Mirosław Stabiński (RVC v2) (Mangio Crepe 64) (1125 Epochs)</h1> **Model by:** kusy <br/> **Voice Actor:** Mirosław Stabiński <br/> **Dataset:** 00:21:47 <br/> <audio controls> <source src="https://cdn.discordapp.com/attachments/1138209218969731183/1138209243686776903/example.mp3" type="audio/mpeg"> </audio><br /> <audio controls> <source src="https://cdn.discordapp.com/attachments/1138209218969731183/1138211956268998697/gadanie.wav" type="audio/wav"> </audio> <a href="https://huggingface.co/mrkusypl/Miroslaw-Stabinski/resolve/main/Miros%C5%82aw%20Stabi%C5%84ski%20%5B1125%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a> </center>
hoaio/q-Taxi-v3
hoaio
2023-08-14T02:27:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-20T07:34:26Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="hoaio/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
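The usage snippet in the card above is partial (`load_from_hub` and `gym` are never imported). A fuller sketch, assuming the pickle follows the Deep RL Course format, i.e. a dict holding at least `env_id` and `qtable`, and that Gymnasium is installed:

```python
# Hedged sketch: the card's load_from_hub comes from the Deep RL Course utilities;
# here we assume the artifact is a pickled dict with "env_id" and "qtable" keys.
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="hoaio/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])  # Taxi-v3
qtable = model["qtable"]

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```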
Evan-Lin/Bart-large-abs-amazon-entailment
Evan-Lin
2023-08-14T01:55:53Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-14T01:43:21Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin//tmp/tmpetrgbosh/Evan-Lin/Bart-large-abs-amazon-entailment") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmpetrgbosh/Evan-Lin/Bart-large-abs-amazon-entailment") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmpetrgbosh/Evan-Lin/Bart-large-abs-amazon-entailment") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Evan-Lin/Bart-large-abs-amazon-allure2
Evan-Lin
2023-08-14T01:55:14Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-14T01:41:17Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin//tmp/tmpe0oa5rsb/Evan-Lin/Bart-large-abs-amazon-allure2") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmpe0oa5rsb/Evan-Lin/Bart-large-abs-amazon-allure2") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmpe0oa5rsb/Evan-Lin/Bart-large-abs-amazon-allure2") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Evan-Lin/Bart-large-abs-amazon-entailment2-rouge
Evan-Lin
2023-08-14T01:33:15Z
45
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-14T01:15:41Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin//tmp/tmpghor1ugg/Evan-Lin/Bart-large-abs-amazon-entailment2-rouge") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmpghor1ugg/Evan-Lin/Bart-large-abs-amazon-entailment2-rouge") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmpghor1ugg/Evan-Lin/Bart-large-abs-amazon-entailment2-rouge") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
csukuangfj/sherpa-onnx-streaming-paraformer-bilingual-zh-en
csukuangfj
2023-08-14T01:27:14Z
0
1
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2023-08-14T01:25:23Z
--- license: apache-2.0 --- `*.onnx` models are converted from https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary See also https://huggingface.co/csukuangfj/streaming-paraformer-zh Note: We have used https://huggingface.co/csukuangfj/streaming-paraformer-zh/blob/main/add-model-metadata.py to add metadata to `model.onnx` and renamed it to `encoder.onnx`.
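The card above points to an external `add-model-metadata.py` script. As a rough illustration only (not that script), ONNX metadata can be attached with the `onnx` package along these lines; the keys and values below are placeholders, not the fields sherpa-onnx actually expects:

```python
# Illustrative only: attach free-form metadata to an ONNX model and rename it,
# in the spirit of the add-model-metadata.py script referenced above.
import onnx

model = onnx.load("model.onnx")
for key, value in {"model_type": "paraformer", "version": "1"}.items():  # placeholder metadata
    entry = model.metadata_props.add()
    entry.key = key
    entry.value = value
onnx.save(model, "encoder.onnx")
```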
ckandemir/a2c-PandaReachDense-v3
ckandemir
2023-08-14T01:08:29Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-14T01:02:42Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.22 +/- 0.12 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
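The usage section in the card above is a TODO stub. A hedged completion using the usual `huggingface_sb3` + Stable-Baselines3 loading pattern; the checkpoint filename inside the repo and the `panda-gym` dependency are assumptions:

```python
# Hedged sketch of the standard SB3 loading pattern; the exact filename inside
# the repo is an assumption, and PandaReachDense-v3 requires panda-gym.
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers the Panda environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="ckandemir/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```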
gregorgabrovsek/SloBertAA_Top10_WithOOC_082023
gregorgabrovsek
2023-08-14T01:03:46Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "camembert", "text-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T17:09:23Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: SloBertAA_Top10_WithOOC_082023 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SloBertAA_Top10_WithOOC_082023 This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7250 - Accuracy: 0.9087 - F1: 0.9077 - Precision: 0.9076 - Recall: 0.9087 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3963 | 1.0 | 16293 | 0.3859 | 0.8775 | 0.8765 | 0.8784 | 0.8775 | | 0.3207 | 2.0 | 32586 | 0.3425 | 0.8928 | 0.8928 | 0.8949 | 0.8928 | | 0.2433 | 3.0 | 48879 | 0.3723 | 0.9011 | 0.8995 | 0.8999 | 0.9011 | | 0.1874 | 4.0 | 65172 | 0.4615 | 0.9018 | 0.8999 | 0.9004 | 0.9018 | | 0.1537 | 5.0 | 81465 | 0.5215 | 0.9026 | 0.9011 | 0.9014 | 0.9026 | | 0.1136 | 6.0 | 97758 | 0.5769 | 0.9044 | 0.9027 | 0.9029 | 0.9044 | | 0.067 | 7.0 | 114051 | 0.6370 | 0.9060 | 0.9039 | 0.9041 | 0.9060 | | 0.0514 | 8.0 | 130344 | 0.6676 | 0.9058 | 0.9047 | 0.9049 | 0.9058 | | 0.0275 | 9.0 | 146637 | 0.7306 | 0.9064 | 0.9054 | 0.9061 | 0.9064 | | 0.0243 | 10.0 | 162930 | 0.7250 | 0.9087 | 0.9077 | 0.9076 | 0.9087 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.8.0 - Datasets 2.10.1 - Tokenizers 0.13.2
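The card above documents training only. A minimal inference sketch using the generic sequence-classification path; the authorship class names are not listed in the card, so the example just prints whatever label the model config maps to:

```python
# Hedged inference sketch: generic sequence classification with this checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "gregorgabrovsek/SloBertAA_Top10_WithOOC_082023"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Arbitrary Slovenian example sentence ("an example text for authorship attribution").
inputs = tokenizer("Primer slovenskega besedila za določanje avtorstva.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```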
rdpb/lora-trained-xl-colab2
rdpb
2023-08-14T00:50:18Z
1
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-13T23:00:59Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of thaisluna tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - rdpb/lora-trained-xl-colab2 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of thaisluna using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
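The card above names the base model and instance prompt but shows no loading code. A hedged sketch of the usual diffusers pattern for SDXL LoRA weights; fp16, CUDA, and the fp16-fix VAE mirror the training setup but are optional:

```python
# Hedged usage sketch: load the SDXL base pipeline and attach these LoRA weights.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rdpb/lora-trained-xl-colab2")

# Instance prompt taken from the card's front matter.
image = pipe("a photo of thaisluna", num_inference_steps=30).images[0]
image.save("thaisluna.png")
```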
brunoboat/ppo-LunarLander-8
brunoboat
2023-08-14T00:42:45Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-08-14T00:11:54Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -145.84 +/- 70.15 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'brunoboat/ppo-LunarLander-8' 'batch_size': 512 'minibatch_size': 128} ```
C-Lo/balanced_gendered-dataset
C-Lo
2023-08-14T00:21:59Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-14T00:18:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: balanced_gendered-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # balanced_gendered-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
Nelver28/grailsolver-test-10
Nelver28
2023-08-14T00:13:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-14T00:13:27Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
mkshing/novelai-tokenizer-v1
mkshing
2023-08-14T00:12:17Z
0
0
null
[ "tokenizer", "novelai", "sentencepiece", "en", "ja", "license:gpl-2.0", "region:us" ]
null
2023-07-04T06:41:50Z
--- license: gpl-2.0 language: - en - ja tags: - tokenizer - novelai - sentencepiece --- # NovelAI Tokenizer v1 This repository is exactly the same as [NovelAI/nerdstash-tokenizer-v1](https://huggingface.co/NovelAI/nerdstash-tokenizer-v1), but the config has been changed to address the following points (the sp model itself is not changed). - Load as T5Tokenizer - Enable to decode digits (In the original, digits are registered as `additional_special_tokens`, so if `skip_special_tokens=True` when decoding, the digits are also skipped.) ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mkshing/novelai-tokenizer-v1", use_fast=False) text = "1+1=3" tokenizer.decode(tokenizer.encode(text), skip_special_tokens=True) # '1+1=3' ```
HexHands/finishABOUTME
HexHands
2023-08-14T00:04:07Z
153
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-01T01:56:24Z
--- license: cc-by-4.0 language: en tags: - text-generation pipeline_tag: text-generation widget: - text: "My name is " - text: "I believe that I need to be more friendly." - text: "Follow @griffpatch!" - text: "How will my projects get better?" --- # finishABOUTME finishABOUTME is a torch model which was trained on 2000 Scratch About Me sections. It is meant to finish any About Me section! # Example Input: This Scratch Studio will reach 100 followers in a few days!\n Output: This Scratch Studio will reach 100 followers in a few days!\nThis studio here so much slower. Sorry for the inconveni have all, but we get every monday feel free to add projects about duckling Pond!\n\nThe Duckling Pond
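A minimal generation sketch for the card above, assuming the standard transformers text-generation pipeline; the prompt is taken from the card's own example and the sampling settings are arbitrary:

```python
# Minimal sketch: complete an "About Me" prompt with the text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="HexHands/finishABOUTME")
print(generator(
    "This Scratch Studio will reach 100 followers in a few days!\n",
    max_new_tokens=60,   # arbitrary sampling settings
    do_sample=True,
    top_p=0.95,
)[0]["generated_text"])
```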
ckandemir/ML-Agents-Pyramids
ckandemir
2023-08-13T23:58:35Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-08-13T23:58:32Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: ckandemir/ML-Agents-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e9_s55555_v4_l5_v50
KingKazma
2023-08-13T23:23:18Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T23:23:15Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
AOLCDROM/WAV2LIP-HQ-Updated-MIRROR
AOLCDROM
2023-08-13T23:22:41Z
0
3
null
[ "region:us" ]
null
2023-08-13T23:14:06Z
--- license: other --- This is a mirror of the weights for the Wav2Lip-HQ-Updated repo, because the linked files on Google Drive appear to be incorrect or down. License follows the original authors' intent.
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e9_s108_v4_l5_v50
KingKazma
2023-08-13T23:20:22Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T23:20:21Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e8_s55555_v4_l5_v50
KingKazma
2023-08-13T23:15:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T23:15:45Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
AmelieSchreiber/esm2_t12_35M_UR50D_RNA_LoRA_weighted
AmelieSchreiber
2023-08-13T23:13:58Z
2
1
peft
[ "peft", "transformers", "biology", "esm", "esm2", "protein", "protein language model", "en", "license:mit", "region:us" ]
null
2023-08-13T23:01:51Z
--- library_name: peft license: mit language: - en tags: - transformers - biology - esm - esm2 - protein - protein language model --- # ESM-2 RNA Binding Site LoRA This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of the [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) model for the (binary) token classification task of predicting RNA binding sites of proteins. You can also find a version of this model that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor). ## Training procedure This is a Low Rank Adaptation (LoRA) of `esm2_t12_35M_UR50D`, trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites) using a `85/15` train/test split. This model was trained with class weighting due to the imbalanced nature of the RNA binding site dataset (fewer binding sites than non-binding sites). This model has slightly improved precision, recall, and F1 score over [AmelieSchreiber/esm2_t12_35M_weighted_lora_rna_binding](https://huggingface.co/AmelieSchreiber/esm2_t12_35M_weighted_lora_rna_binding) but may suffer from mild overfitting, as indicated by the training loss being slightly lower than the eval loss (see metrics below). If you are searching for binding sites and aren't worried about false positives, the higher recall may make this model preferable to the other RNA binding site predictors. You can train your own version using [this notebook](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_weighted_lora_rna_binding/blob/main/LoRA_binding_sites_no_sweeps_v2.ipynb)! You just need the RNA `binding_sites.xml` file [found here](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites). You may also need to run some `pip install` statements at the beginning of the script. If you are running in colab run: ```python !pip install transformers[torch] datasets peft -q ``` ```python !pip install accelerate -U -q ``` Try to improve upon these metrics by adjusting the hyperparameters: ``` {'eval_loss': 0.500779926776886, 'eval_precision': 0.1708695652173913, 'eval_recall': 0.8397435897435898, 'eval_f1': 0.2839595375722543, 'eval_auc': 0.771835775620126, 'epoch': 11.0} {'loss': 0.4171, 'learning_rate': 0.00032491416877500004, 'epoch': 11.43} ``` A similar model can also be trained using the Github with a training script and conda env YAML, which can be [found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). This version uses wandb sweeps for hyperparameter search. However, it does not use class weighting. 
### Framework versions - PEFT 0.4.0 ## Using the Model To use the model, try running the following pip install statements: ```python !pip install transformers peft -q ``` then try running: ```python from transformers import AutoModelForTokenClassification, AutoTokenizer from peft import PeftModel import torch # Path to the saved LoRA model model_path = "AmelieSchreiber/esm2_t12_35M_UR50D_RNA_LoRA_weighted" # ESM2 base model base_model_path = "facebook/esm2_t12_35M_UR50D" # Load the model base_model = AutoModelForTokenClassification.from_pretrained(base_model_path) loaded_model = PeftModel.from_pretrained(base_model, model_path) # Ensure the model is in evaluation mode loaded_model.eval() # Load the tokenizer loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Protein sequence for inference protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence # Tokenize the sequence inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length') # Run the model with torch.no_grad(): logits = loaded_model(**inputs).logits # Get predictions tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens predictions = torch.argmax(logits, dim=2) # Define labels id2label = { 0: "No binding site", 1: "Binding site" } # Print the predicted labels for each token for token, prediction in zip(tokens, predictions[0].numpy()): if token not in ['<pad>', '<cls>', '<eos>']: print((token, id2label[prediction])) ```
D4ve-R/yellow-lora-sd15
D4ve-R
2023-08-13T23:09:19Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-12T17:29:27Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - D4ve-R/yellow-lora-sd15 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
FireHead90544/RudraRVCs
FireHead90544
2023-08-13T23:08:19Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-08-09T15:39:45Z
--- license: openrail --- # RVCs - Some of the voices I trained **Seiya Ryuuguuin - The Hero Is Overpowered But Overly Cautious (JP VA: Yuuichirou Umehara)** Currently, these ones are available: - ## [Seiya Ryuuguuin RVC v2 Mangio-Crepe (340 Epochs, 5440 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinRVC.zip) - ## [Seiya Ryuuguuin RVC v2 RMVPE (300 Epochs, 6300 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinV2.zip) # This seems to perform better - ## [Seiya Ryuuguuin Max RVC v2 RMVPE (400 Epochs, 8400 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinMax.zip) # Probably the best one ## Samples - ### Mangio-Crepe - [NEFFEX - Cold](https://cdn.discordapp.com/attachments/1090766429785178142/1138861234561753249/Seiya_Ryuuguuin_-_Cold.mp3) - [Kenshi Yonezu - Kick Back](https://cdn.discordapp.com/attachments/1090766429785178142/1138861234951819264/Seiya_Ryuuguuin_-_Kick_Back.mp3) - ### RMVPE - [YOASOBI - Running Into The Night](https://cdn.discordapp.com/attachments/549264174753120267/1138908849076703332/Seiya_Ryuuguuin_-_Racing_Into_The_Night.mp3) - [Tk From Ling Tosite Sigure - Unravel](https://cdn.discordapp.com/attachments/549264174753120267/1138908849789734972/Seiya_Ryuuguuin_-_Unravel.mp3) - [Jin Hashimoto - Stand Proud](https://cdn.discordapp.com/attachments/549264174753120267/1138908849424834741/Seiya_Ryuuguuin_-_Stand_Proud.mp3) - [KSUKE - Contradiction](https://cdn.discordapp.com/attachments/549264174753120267/1138908848749551636/Seiya_Ryuuguuin_-_Contradiction.mp3) - [Smash Mouth - All Star](https://cdn.discordapp.com/attachments/549264174753120267/1138908850137858189/Seiya_Ryuuguuin_-_All_Star.mp3) - [OxT - Clattanoia](https://cdn.discordapp.com/attachments/549264174753120267/1138908850469216327/Seiya_Ryuuguuin_-_Clattanoia.mp3) - <video controls width="640" height="360"> <source src="https://cdn.discordapp.com/attachments/1138965403658362910/1139679982717767870/Cupid.mp4" type="video/mp4"> Your browser does not support the video tag. </video> - <video controls width="640" height="360"> <source src="https://cdn.discordapp.com/attachments/1138965403658362910/1140419271772606474/Yoru_Ni_Kakeru.mp4" type="video/mp4"> Your browser does not support the video tag. </video>
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e7_s55555_v4_l5_v50
KingKazma
2023-08-13T23:08:18Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T23:08:14Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
camus-ng/lora-trained-xl-cory-5
camus-ng
2023-08-13T23:07:09Z
0
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-13T14:01:35Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of <ntvc> man tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - camus-ng/lora-trained-xl-cory-5 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of <ntvc> man using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e6_s55555_v4_l5_v50
KingKazma
2023-08-13T23:00:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T23:00:44Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
RohitKeswani/flan_t5_base_peft
RohitKeswani
2023-08-13T22:54:32Z
2
1
peft
[ "peft", "Summarization", "summarization", "region:us" ]
summarization
2023-08-13T22:43:54Z
--- library_name: peft tags: - Summarization pipeline_tag: summarization --- ## Training procedure ### Framework versions - PEFT 0.4.0
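The card above only records the PEFT version. A hedged loading sketch that resolves the base checkpoint from the adapter's own `PeftConfig` rather than hard-coding it; the input text is an arbitrary example:

```python
# Hedged sketch: load the PEFT adapter on top of its base seq2seq model and summarize.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "RohitKeswani/flan_t5_base_peft"
config = PeftConfig.from_pretrained(adapter_id)          # records the base checkpoint
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

text = "summarize: The quick brown fox jumped over the lazy dog near the riverbank at dawn."
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```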
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e5_s55555_v4_l5_v50
KingKazma
2023-08-13T22:53:17Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:53:13Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s55555_v4_l5_v50
KingKazma
2023-08-13T22:45:46Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:45:43Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s55555_v4_l5_v50
KingKazma
2023-08-13T22:38:16Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:38:13Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s108_v4_l5_v50
KingKazma
2023-08-13T22:32:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:32:30Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
platzi/platzi-distilroberta-base-mrpc-glue-angrim
platzi
2023-08-13T22:31:39Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T21:44:25Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 widget: - text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.", "Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."] example_title: Not Equivalent - text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."] example_title: Equivalent model-index: - name: platzi-distilroberta-base-mrpc-glue-angrim results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8284313725490197 - name: F1 type: f1 value: 0.8771929824561404 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilroberta-base-mrpc-glue-angrim This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.3994 - Accuracy: 0.8284 - F1: 0.8772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5211 | 1.09 | 500 | 0.3994 | 0.8284 | 0.8772 | | 0.3565 | 2.18 | 1000 | 0.5487 | 0.8456 | 0.8857 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
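Since MRPC is a sentence-pair task, inference needs both sentences. A minimal sketch, assuming the transformers text-classification pipeline's `text`/`text_pair` input format and reusing one of the widget examples from the card above:

```python
# Hedged sketch: score a sentence pair for semantic equivalence (MRPC).
from transformers import pipeline

checker = pipeline("text-classification", model="platzi/platzi-distilroberta-base-mrpc-glue-angrim")
result = checker({
    "text": "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
    "text_pair": "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier.",
})
print(result)
```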
redstonehero/meinahentai_v4
redstonehero
2023-08-13T22:29:04Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T20:13:29Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/meinapastel_v6
redstonehero
2023-08-13T22:28:59Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T20:13:32Z
--- license: creativeml-openrail-m library_name: diffusers ---
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e1_s55555_v4_l5_v50
KingKazma
2023-08-13T22:23:15Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:23:12Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
ckandemir/ppo-SnowballTarget
ckandemir
2023-08-13T22:16:59Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-08-13T22:16:57Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: ckandemir/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e1_s108_v4_l5_v50
KingKazma
2023-08-13T22:16:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:16:34Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e0_s55555_v4_l5_v50
KingKazma
2023-08-13T22:15:45Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:15:42Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e9_s55555_v4_l4_v100
KingKazma
2023-08-13T22:03:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:03:02Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
langdonh/en_student_name_detector
langdonh
2023-08-13T22:02:34Z
0
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2023-08-13T22:02:11Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_student_name_detector results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7230769231 - name: NER Recall type: recall value: 0.734375 - name: NER F Score type: f_score value: 0.7286821705 ---
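The card above lists only NER metrics. A minimal usage sketch, assuming the packaged spaCy pipeline from this repo has already been installed (for example by pip-installing the wheel published in the repo); the input sentence is an arbitrary example:

```python
# Hedged sketch: load the installed spaCy pipeline and print detected entities.
import spacy

nlp = spacy.load("en_student_name_detector")  # assumes the package is installed
doc = nlp("Today Maria and Jacob presented their science project to the class.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```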
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e-1_s108_v4_l5_v50
KingKazma
2023-08-13T22:00:43Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-13T22:00:41Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e9_s55555_v4_l4_v100
KingKazma
2023-08-13T21:55:36Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:55:21Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
BrendaScar/dqn-SpaceInvadersNoFrameskip-v4
BrendaScar
2023-08-13T21:53:30Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-13T21:52:53Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 657.50 +/- 163.33 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BrendaScar -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BrendaScar -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BrendaScar ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e7_s55555_v4_l4_v100
KingKazma
2023-08-13T21:45:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:45:46Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
Wanaldino/lora-trained-xl-colab
Wanaldino
2023-08-13T21:43:30Z
0
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-13T19:54:46Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of a women tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Wanaldino/lora-trained-xl-colab These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of a women using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
redstonehero/cetusmix_v4
redstonehero
2023-08-13T21:42:07Z
751
4
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T20:31:26Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/angrarealflex_v20
redstonehero
2023-08-13T21:42:05Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T20:38:38Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/cyberrealistic_v33
redstonehero
2023-08-13T21:41:58Z
30
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T20:17:37Z
--- license: creativeml-openrail-m library_name: diffusers ---
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e8_s55555_v4_l5_v50
KingKazma
2023-08-13T21:39:52Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:39:50Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
SaranaAbidueva/mbart50_ru_bua
SaranaAbidueva
2023-08-13T21:38:06Z
104
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "ru", "bua", "bxr", "dataset:SaranaAbidueva/buryat-russian_parallel_corpus", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-11T10:42:25Z
--- language: - ru - bua - bxr datasets: - SaranaAbidueva/buryat-russian_parallel_corpus metrics: - bleu --- This model translates from Russian to Buryat language. How to use in Python: ```python from transformers import MBartForConditionalGeneration, MBart50Tokenizer model = MBartForConditionalGeneration.from_pretrained("SaranaAbidueva/mbart50_ru_bua") tokenizer = MBart50Tokenizer.from_pretrained("SaranaAbidueva/mbart50_ru_bua") def translate(text, max_length=200, num_beams=5, repetition_penalty=5.0, **kwargs): encoded = tokenizer(text, return_tensors="pt") generated_tokens = model.generate( **encoded.to(model.device), max_length=max_length, num_beams=num_beams, repetition_penalty=repetition_penalty ) return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0] translate('Евгений Онегин интересная книга') ```
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e6_s55555_v4_l4_v100
KingKazma
2023-08-13T21:37:10Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:37:09Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e5_s55555_v4_l4_v100
KingKazma
2023-08-13T21:28:06Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:28:01Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bweln/llama-2-7b-miniguanaco
bweln
2023-08-13T21:21:55Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-13T21:12:26Z
A model from a fine-tuning exercise. See more: https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html
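A minimal generation sketch for the card above, assuming a GPU plus `accelerate` for `device_map="auto"`; float16 and the prompt are arbitrary choices, not from the card:

```python
# Hedged sketch: generate with the fine-tuned 7B model on a GPU.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bweln/llama-2-7b-miniguanaco",
    torch_dtype=torch.float16,   # assumes a GPU with enough memory
    device_map="auto",
)
print(generator("Explain what fine-tuning a language model means.", max_new_tokens=128)[0]["generated_text"])
```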
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e4_s55555_v4_l4_v100
KingKazma
2023-08-13T21:21:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:21:03Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
GeneralRincewind/ShakespeareGPT
GeneralRincewind
2023-08-13T21:20:51Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-13T05:59:18Z
https://colab.research.google.com/drive/1Dlm8FA9JjjcqJIkfCagaIQWex8Ho5IKI#scrollTo=e8xIjRNsl3Bb ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("GeneralRincewind/ShakespeareGPT") model = AutoModelForCausalLM.from_pretrained("GeneralRincewind/ShakespeareGPT") #### Generate text from transformers import TextStreamer tokenized_text = tokenizer("", return_tensors="pt", truncation=True) input_ids = tokenized_text.input_ids streamer = TextStreamer(tokenizer) model.to("cuda") model.eval() full_completion = model.generate(inputs=tokenized_text["input_ids"].to("cuda"), attention_mask=tokenized_text["attention_mask"].to("cuda"), temperature=0.9, top_k=80, top_p=0.65, do_sample=True, streamer=streamer, num_beams=1, max_new_tokens=500, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, repetition_penalty=1) decoded_text = tokenizer.decode(full_completion[0]) print(decoded_text) ```
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e4_s55555_v4_l4_v100
KingKazma
2023-08-13T21:19:56Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:19:55Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e5_s55555_v4_l5_v50
KingKazma
2023-08-13T21:16:27Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:16:25Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e3_s55555_v4_l4_v100
KingKazma
2023-08-13T21:11:18Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-13T18:29:22Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
RazzzHF/kendrick
RazzzHF
2023-08-13T21:10:56Z
0
0
null
[ "license:cc-by-nc-nd-4.0", "region:us" ]
null
2023-08-13T21:10:02Z
--- license: cc-by-nc-nd-4.0 ---
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e9_s55555_v4_l4_v100
KingKazma
2023-08-13T21:07:52Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:07:51Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e2_s55555_v4_l4_v100
KingKazma
2023-08-13T21:07:34Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:07:30Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e2_s55555_v4_l4_v100
KingKazma
2023-08-13T21:02:41Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T18:20:44Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s55555_v4_l4_v100
KingKazma
2023-08-13T21:00:57Z
2
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:00:56Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e3_s55555_v4_l5_v50
KingKazma
2023-08-13T21:00:49Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:00:48Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s55555_v4_l4_v100
KingKazma
2023-08-13T21:00:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T21:00:43Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e7_s55555_v4_l4_v100
KingKazma
2023-08-13T20:54:01Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:53:57Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0058
bigmorning
2023-08-13T20:53:24Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T20:53:17Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0058 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0058 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0013 - Train Accuracy: 0.0795 - Train Wermet: 7.9766 - Validation Loss: 0.5741 - Validation Accuracy: 0.0768 - Validation Wermet: 6.8820 - Epoch: 57 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | | 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 | | 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 | | 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 | | 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 | | 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 | | 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 | | 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 | | 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 | | 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 | | 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 | | 0.0021 | 0.0795 | 7.8370 | 0.5680 | 0.0768 | 7.0030 | 53 | | 0.0014 | 0.0795 | 7.7408 | 0.5661 | 0.0769 | 7.1664 | 54 | | 0.0009 | 0.0795 | 7.7601 | 0.5639 | 0.0769 | 6.9567 | 55 | | 0.0006 | 0.0795 | 7.8589 | 0.5667 | 0.0769 | 7.3058 | 56 | | 0.0013 | 0.0795 | 7.9766 | 0.5741 | 0.0768 | 6.8820 | 57 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
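The card above covers training only. A hedged transcription sketch using the TensorFlow Whisper classes; the processor is loaded from the `openai/whisper-tiny` base on the assumption that this repo stores only fine-tuned weights, and the audio is a placeholder:

```python
# Hedged sketch: transcribe 16 kHz audio with the TF Whisper classes.
import numpy as np
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")  # assumed base processor
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_charsplit_new_round2__0058")

audio = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```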
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e2_s55555_v4_l5_v50
KingKazma
2023-08-13T20:53:01Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:19:30Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e6_s55555_v4_l4_v100
KingKazma
2023-08-13T20:47:03Z
2
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:47:02Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e0_s55555_v4_l4_v100
KingKazma
2023-08-13T20:45:28Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T18:03:26Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e9_s108_v4_l4_v100
KingKazma
2023-08-13T20:38:26Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:38:21Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e0_s55555_v4_l5_v50
KingKazma
2023-08-13T20:37:26Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:04:51Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0054
bigmorning
2023-08-13T20:35:53Z
58
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T20:35:47Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0054 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0054 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0021 - Train Accuracy: 0.0795 - Train Wermet: 7.8370 - Validation Loss: 0.5680 - Validation Accuracy: 0.0768 - Validation Wermet: 7.0030 - Epoch: 53 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | | 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 | | 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 | | 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 | | 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 | | 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 | | 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 | | 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 | | 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 | | 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 | | 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 | | 0.0021 | 0.0795 | 7.8370 | 0.5680 | 0.0768 | 7.0030 | 53 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e-1_s55555_v4_l4_v100
KingKazma
2023-08-13T20:27:50Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:27:50Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0052
bigmorning
2023-08-13T20:27:13Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T20:27:05Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0052 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0052 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0011 - Train Accuracy: 0.0795 - Train Wermet: 8.4376 - Validation Loss: 0.5665 - Validation Accuracy: 0.0768 - Validation Wermet: 7.2469 - Epoch: 51 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | | 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 | | 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 | | 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 | | 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 | | 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 | | 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 | | 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 | | 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 | | 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e2_s55555_v4_l4_v100
KingKazma
2023-08-13T20:19:21Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:19:20Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0050
bigmorning
2023-08-13T20:18:38Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T20:18:18Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0050 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0050 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0006 - Train Accuracy: 0.0795 - Train Wermet: 8.0561 - Validation Loss: 0.5729 - Validation Accuracy: 0.0767 - Validation Wermet: 7.4189 - Epoch: 49 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | | 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 | | 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 | | 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 | | 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 | | 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 | | 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 | | 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e6_s108_v4_l4_v100
KingKazma
2023-08-13T20:18:06Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:22:24Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e9_s108_v4_l4_v100
KingKazma
2023-08-13T20:13:10Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:13:09Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0048
bigmorning
2023-08-13T20:09:40Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T20:09:32Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0048 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0048 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0008 - Train Accuracy: 0.0795 - Train Wermet: 8.0277 - Validation Loss: 0.5581 - Validation Accuracy: 0.0769 - Validation Wermet: 6.9081 - Epoch: 47 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | | 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 | | 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 | | 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 | | 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 | | 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e0_s55555_v4_l4_v100
KingKazma
2023-08-13T20:05:30Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T20:05:29Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0047
bigmorning
2023-08-13T20:05:14Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T20:05:07Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0047 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0047 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0006 - Train Accuracy: 0.0795 - Train Wermet: 8.0596 - Validation Loss: 0.5573 - Validation Accuracy: 0.0768 - Validation Wermet: 6.9489 - Epoch: 46 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | | 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 | | 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 | | 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 | | 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
redstonehero/lofi_v3
redstonehero
2023-08-13T20:05:07Z
32
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T18:40:18Z
--- license: creativeml-openrail-m library_name: diffusers ---
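This card carries only a license and library tag, but the record's tags name `StableDiffusionPipeline`, which implies the standard diffusers loading path. A sketch under that assumption; the prompt and the use of fp16 on a CUDA device are illustrative choices, not documented by the card:

```python
# Sketch: load this text-to-image checkpoint with the pipeline class named in the record's tags.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("redstonehero/lofi_v3", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

image = pipe("a cozy lo-fi bedroom at night, warm lamp light").images[0]
image.save("lofi_v3_sample.png")
```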
redstonehero/m4rv3lsdungeonsv40
redstonehero
2023-08-13T20:05:01Z
5
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-13T18:43:05Z
--- license: creativeml-openrail-m library_name: diffusers ---
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e4_s108_v4_l4_v100
KingKazma
2023-08-13T20:04:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:08:30Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
JapGuy/MichalHruza_V1_1000Epochs_RVC_v2
JapGuy
2023-08-13T20:04:01Z
0
0
null
[ "music", "rvc", "michal", "hruza", "model", "audio-to-audio", "cs", "license:openrail", "region:us" ]
audio-to-audio
2023-08-13T19:57:52Z
--- license: openrail language: - cs pipeline_tag: audio-to-audio tags: - music - rvc - michal - hruza - model --- ![image.png](https://i.scdn.co/image/ab6761610000e5eb88c7a16ec398bbe6e7b90538) # Michal Hrůza [CZ] (v1) # 1000 Epochs - RVC V2 - mangio-crepe - 64 Hop Length Trained on 14 minutes of isolated acapellas extracted with UVR (Voc FT + Reverb HQ), plus Audacity to remove parts with double vocals and vocals from other singers (+ Noise Gate).
Ridhto/TomatsuHaruka
Ridhto
2023-08-13T20:03:25Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-04T07:26:17Z
--- license: creativeml-openrail-m ---
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e-1_s55555_v4_l4_v100
KingKazma
2023-08-13T19:58:35Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:58:34Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e3_s108_v4_l4_v100
KingKazma
2023-08-13T19:57:57Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:01:34Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
sherif1311/flan-t5-base-intent
sherif1311
2023-08-13T19:57:32Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-13T17:45:07Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - f1 model-index: - name: flan-t5-base-intent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-intent This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - F1: 100.0 - Gen Len: 2.3333 ## Model description Wrap any tweet in double quotation marks before passing it to the model. Labels: 0: Anti-tobacco, 1: Neutral, 2: Pro-tobacco ## Intended uses & limitations The fine-tuned model, developed by STOP, is intended for anti-tobacco/pro-tobacco monitoring of social media. ## Training and evaluation data The model was developed and fine-tuned at STOP, University of Bath, UK. The data used is sherif1311/intend, which was collected, augmented and used for training by STOP. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 1.12.1+cu116 - Datasets 2.14.4 - Tokenizers 0.12.1
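Since the card only specifies the double-quoting convention and the 0/1/2 label scheme, the exact prompt format is an assumption; a minimal inference sketch under that reading:

```python
# Sketch: classify a tweet with the fine-tuned checkpoint; label mapping per the card (0 anti, 1 neutral, 2 pro).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sherif1311/flan-t5-base-intent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tweet = '"Vaping with friends after work is the best part of my day"'  # wrapped in double quotes as the card asks
inputs = tokenizer(tweet, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))  # expected to print 0, 1 or 2
```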
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e7_s108_v4_l4_v100
KingKazma
2023-08-13T19:55:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:55:30Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
s3nh/flozi00-Llama-2-13B-german-assistant-v3-GGML
s3nh
2023-08-13T19:51:57Z
0
0
transformers
[ "transformers", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2023-08-13T19:51:56Z
--- license: openrail pipeline_tag: text-generation library_name: transformers language: - zh - en --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGML format model files for [This project](https://huggingface.co/Photolens/OpenOrcaxOpenChat-2-13b-langchain-chat). ### Inference ```python from ctransformers import AutoModelForCausalLM llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file, gpu_layers=32, model_type="llama") manual_input: str = "Tell me about your last dream, please." llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7) ``` # Original model card
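In the inference snippet above, `output_dir` and `ggml_file` are undefined placeholders for a local directory and GGML file name; point them at the local path of the downloaded GGML weights before running it.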
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e2_s108_v4_l4_v100
KingKazma
2023-08-13T19:51:14Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T18:54:36Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e9_s108_v4_l4_v100
KingKazma
2023-08-13T19:47:51Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:47:50Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e9_s108_v4_l5_v50
KingKazma
2023-08-13T19:47:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:47:45Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0043
bigmorning
2023-08-13T19:47:37Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T19:47:30Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0043 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0043 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0001 - Train Accuracy: 0.0795 - Train Wermet: 8.2925 - Validation Loss: 0.5648 - Validation Accuracy: 0.0770 - Validation Wermet: 7.1917 - Epoch: 42 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | | 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s108_v4_l4_v100
KingKazma
2023-08-13T19:44:32Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T18:47:40Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0042
bigmorning
2023-08-13T19:43:07Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T19:43:01Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0042 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0042 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0001 - Train Accuracy: 0.0795 - Train Wermet: 8.2484 - Validation Loss: 0.5678 - Validation Accuracy: 0.0769 - Validation Wermet: 7.6993 - Epoch: 41 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | | 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s108_v4_l4_v100
KingKazma
2023-08-13T19:40:55Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-13T19:40:54Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
bigmorning/whisper_charsplit_new_round2__0041
bigmorning
2023-08-13T19:38:44Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-13T19:38:36Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_charsplit_new_round2__0041 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_charsplit_new_round2__0041 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0001 - Train Accuracy: 0.0795 - Train Wermet: 8.1912 - Validation Loss: 0.5632 - Validation Accuracy: 0.0770 - Validation Wermet: 7.1929 - Epoch: 40 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 | | 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 | | 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 | | 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 | | 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 | | 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 | | 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 | | 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 | | 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 | | 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 | | 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 | | 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 | | 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 | | 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 | | 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 | | 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 | | 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 | | 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 | | 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 | | 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 | | 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 | | 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 | | 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 | | 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 | | 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 | | 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 | | 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 | | 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 | | 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 | | 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 | | 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 | | 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 | | 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 | | 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 | | 0.0010 | 0.0795 | 8.1006 | 0.5918 | 
0.0766 | 7.4447 | 34 | | 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 | | 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 | | 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 | | 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 | | 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 | | 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 | ### Framework versions - Transformers 4.32.0.dev0 - TensorFlow 2.12.0 - Tokenizers 0.13.3