| Column | Type | Range |
|:--------------|:-----------------------|:------------------------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-27 00:39:58 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-27 00:39:49 |
| card | string | length 11 to 1.01M |
ahyar002/image_classification
ahyar002
2023-09-17T04:33:41Z
219
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-17T04:31:10Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: image_classification results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9420289855072463 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.2653 - Accuracy: 0.9420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 52 | 0.2598 | 0.9565 | | No log | 2.0 | 104 | 0.1608 | 0.9517 | | No log | 3.0 | 156 | 0.1650 | 0.9565 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
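The card above ends with the training details; a minimal inference sketch for this checkpoint with the `transformers` image-classification pipeline (the image path is a hypothetical local file, not part of the card) could look like:

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint; the pipeline also pulls the matching image processor.
classifier = pipeline("image-classification", model="ahyar002/image_classification")

# "bean_leaf.jpg" is a placeholder path to any local bean-leaf photo.
predictions = classifier("bean_leaf.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, sorted by score
```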
Ransaka/sinhala-gpt2
Ransaka
2023-09-17T04:28:36Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "si", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-25T02:11:35Z
--- license: mit tags: - pytorch - gpt2 model-index: - name: sinhala-gpt2 results: [] widget: - text: මහ - text: සංවිධ - text: දුර්ලභ - text: තනිවීලා - text: ඔබ # inference: # parameters: # do_sample: false # temperature: 0.2 # max_new_tokens: 30 language: - si --- # sinhala-gpt2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2), trained on Sinhala news articles collected from various sources. ## Training procedure The model was trained for 12+ hours on Kaggle GPUs. ## Usage Details ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline tokenizer = AutoTokenizer.from_pretrained("Ransaka/sinhala-gpt2") model = AutoModelForCausalLM.from_pretrained("Ransaka/sinhala-gpt2") generator = pipeline("text-generation", model=model, tokenizer=tokenizer) generator("දුර") ``` Or clone the repository with git: ```bash git lfs install git clone https://huggingface.co/Ransaka/sinhala-gpt2 ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.0233 | 1.0 | 15323 | 2.3348 | | 1.6938 | 2.0 | 30646 | 1.8377 | | 1.4938 | 3.0 | 45969 | 1.6498 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
dahara1/ELYZA-japanese-Llama-2-7b-instruct-AWQ
dahara1
2023-09-17T04:24:55Z
23
0
transformers
[ "transformers", "llama", "text-generation", "ja", "en", "arxiv:2306.00978", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "region:us" ]
text-generation
2023-09-08T08:35:31Z
--- inference: false license: llama2 language: - ja - en --- # Model Card for Model ID The original model, [elyza/ELYZA-japanese-Llama-2-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct), is based on Meta's "Llama 2" and has undergone additional pre-training and instruction tuning in Japanese. This model is an [AWQ](https://arxiv.org/abs/2306.00978)-quantized version (reduced to 3.89GB) of the original model (13.48GB). ## Model Details Currently, this model is confirmed to work on **Colab A100** or an RTX 3000-series GPU on a local PC. This is because autoAWQ uses NVIDIA's PTX assembly instructions, some of which are only supported on sm80 and higher architectures. Free Colab (T4) is sm75 and Colab Pro (V100) is sm70. Quantization reduces the amount of memory required and improves execution speed, but unfortunately performance deteriorates. In particular, the original model is tuned to strengthen its ability to follow Japanese instructions rather than to maximize benchmark scores. Although instruction-following ability cannot be measured with existing automated benchmarks, we have confirmed that quantization significantly degrades it. Even so, this model follows instructions better than the previous [GPTQ version](https://huggingface.co/dahara1/ELYZA-japanese-Llama-2-7b-fast-instruct-GPTQ). ## Sample Script ### Colab [AWQ version Colab sample A100 only](https://github.com/webbigdata-jp/python_sample/blob/main/ELYZA_japanese_Llama_2_7b_instruct_AWQ_sample.ipynb) ### local PC Install the library. ``` pip install autoawq ``` Script example. ``` from awq import AutoAWQForCausalLM from transformers import AutoTokenizer quant_path = 'dahara1/ELYZA-japanese-Llama-2-7b-instruct-AWQ' quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4 } quantized_model_dir = "ELYZA-japanese-Llama-2-7b-instruct-AWQ" quant_file = "awq_model_w4_g128.pt" model = AutoAWQForCausalLM.from_quantized(quantized_model_dir, quant_file) tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。" elyza_tasks_100_over_4score_prompt = [ """リラックマが船橋市に行ってふなっしーと強敵(トモ)になり、最終的にはトー横に住みつくというプロットの短編小説を劇画風文体で書いてみてください。""", ] for i in range(len(elyza_tasks_100_over_4score_prompt)): prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format( bos_token=tokenizer.bos_token, b_inst=B_INST, system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}", prompt=elyza_tasks_100_over_4score_prompt[i], e_inst=E_INST, ) tokens = tokenizer(prompt, return_tensors="pt").to("cuda:0").input_ids output = model.generate( input_ids=tokens, max_new_tokens=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(output[0])) ``` Output ``` <s><s> [INST] <<SYS>> あなたは誠実で優秀な日本人のアシスタントです。 <</SYS>> リラックマが船橋市に行ってふなっしーと強敵(トモ)になり、最終的にはトー横に住みつくというプロットの短編小説を劇画風文体で書いてみてください。 [/INST] リラックマが船橋市にやってきた。 彼はふなっしーと強敵(トモ)になるために、船橋競艇場へと向かった。 ふなっしーはリラックマの登場に驚いたが、すぐに強気のレースを展開した。 リラックマはその走りに感化され、自身も熱くなっていく。 ふなっしーは最終周回で逆転を狙うが、リラックマはそれをかわして優勝を飾った。 ふなっしーは敗北を喫しながらも、リラックマの強さを認める。 ふなっしーはリラックマに船橋を後にするよ ``` ## Citations This model is based on the work of the following people: ```tex @misc{elyzallama2023, title={ELYZA-japanese-Llama-2-7b}, url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura}, year={2023}, } ``` ```tex @misc{touvron2023llama, 
title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```tex @article{lin2023awq, title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration}, author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song}, journal={arXiv}, year={2023} } ``` ## about this work - **This Quantization work was done by :** [webbigdata](https://webbigdata.jp/) ## See also [AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration](https://arxiv.org/abs/2306.00978) [mit-han-lab/llm-awq](https://github.com/mit-han-lab/llm-awq) [casper-hansen/AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
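The card shows how to run the quantized checkpoint but not how it was produced. A hedged sketch of the quantization step with AutoAWQ, reusing the quant_config shown above, might look like the following (the exact procedure and AutoAWQ version used are not stated in the card, and newer AutoAWQ releases may expect an additional "version" key in the config):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "elyza/ELYZA-japanese-Llama-2-7b-instruct"  # original full-precision model
quant_path = "ELYZA-japanese-Llama-2-7b-instruct-AWQ"    # output directory for the 4-bit weights
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4}

# Load the original model and tokenizer, run AWQ calibration/quantization, then save the result.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```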
kenghin-chuah/q-Taxi-v3
kenghin-chuah
2023-09-17T04:12:32Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-17T04:12:30Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.38 +/- 2.81 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="kenghin-chuah/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
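The usage snippet assumes a `load_from_hub` helper and a `gym` import that the card does not define; they come from the Hugging Face Deep RL course notebooks. A rough, hedged equivalent of that helper (an approximation, not the card author's exact code) is:

```python
import pickle

import gymnasium as gym  # the card's snippet also assumes gym/gymnasium is imported
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning artifact from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="kenghin-chuah/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```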
Ojas-CoderAI/q-FrozenLake-v1-4x4-noSlippery
Ojas-CoderAI
2023-09-17T04:03:16Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-17T04:03:13Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Ojas-CoderAI/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
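Once the pickle is loaded, the agent simply acts greedily with respect to its Q-table. A minimal, self-contained evaluation episode is sketched below; it assumes the pickled dict follows the Deep RL course format with `env_id` and `qtable` keys (the key names are not stated in the card):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the trained Q-table artifact.
path = hf_hub_download(repo_id="Ojas-CoderAI/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)  # the card notes extra attributes like is_slippery=False
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)  # 1.0 when the agent reaches the goal
```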
kenghin-chuah/q-FrozenLake-v1-4x4-noSlippery
kenghin-chuah
2023-09-17T04:03:04Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-17T04:03:02Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="kenghin-chuah/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
royallab/Pygmalion-2-13b-SuperCOT-weighed
royallab
2023-09-17T03:53:39Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-16T04:46:12Z
--- language: - en library_name: transformers pipeline_tag: text-generation tags: - llama - llama-2 license: llama2 --- # Model Card: Pygmalion-2-13b-SuperCOT-weighted This is an experimental weighted merge between: - [Pygmalion 2 13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b) - [Ausboss's Llama2 SuperCOT loras](https://huggingface.co/ausboss/llama2-13b-supercot-loras) Quantizations provided by us and TheBloke: - [Exl2](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT-weighed-exl2) - [GPTQ](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GPTQ) - [GGUF](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF) The merge was performed by a gradient merge script (apply-lora-weight-ltl.py) from [zaraki-tools](https://github.com/zarakiquemparte/zaraki-tools) by Zaraki. Thanks to Zaraki for the inspiration and help. This merge differs from the previous Pyg-2-SuperCOT merges. The first iteration of the SuperCOT loras were used here since it performed better than SuperCOT2. The SuperCOT lora was merged with the following layer weights (basically 50/50. The exact ratio is 0.51) ``` 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 ``` Here is an image to help visualize this merge. The light blue is Pygmalion-2-13b and the light green is the SuperCOT lora: ![gradient-image](https://files.catbox.moe/ndbz7t.png) ## Usage: Since this is an experimental weight merge between Pygmalion-2 and SuperCOT, the following instruction formats should work: Metharme: ``` <|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|> ``` Alpaca: ``` ### Instruction: Your instruction or question here. ### Response: ``` ## Bias, Risks, and Limitations The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form. In addition, this merge is experimental from our own testing. Your results may vary. ## Training Details This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
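The card lists prompt formats but no loading code. A minimal generation sketch with `transformers`, using the Metharme format quoted above, might look like the following (full fp16 weights need roughly 26 GB of VRAM; the quantized Exl2/GPTQ/GGUF repos linked in the card are the lighter option):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "royallab/Pygmalion-2-13b-SuperCOT-weighed"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Metharme-style prompt taken from the card.
prompt = ("<|system|>This is a text adventure game. Describe the scenario to the user "
          "and give him three options to pick from on each turn.<|user|>Start!<|model|>")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Print only the newly generated continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```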
Evan-Lin/yelp-attractive-2
Evan-Lin
2023-09-17T03:47:17Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-09-16T09:11:25Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin/yelp-attractive-2") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/yelp-attractive-2") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/yelp-attractive-2") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Kushagrajzjzjz/ZombieAp
Kushagrajzjzjz
2023-09-17T03:31:46Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-09-17T03:31:07Z
--- license: openrail --- Hello and assistance Default (GPT-3.5) User Hello ChatGPT Hello! How can I assist you today? User CH 1 A light skinned man stared at his hadcuffs at the back of his car while the radio played an old song the driver, a middle aged police officer, looked at the mirror and told him to confess to make it easier son. he then said that stealing gives you big time, speacially from a Factory owned by Necro Enterprises. the man(Jake) denies the fact that he stolr anything and then closes his eyes as they both converse and start the highway , suddenly in the middle of the highway a looming dark figure appears out of nowhere Jake Screams WATCH IUT but its too lste the officer, in conversation, doesnt make it in time and tries to swerve the car . This causes the car to swerve and falls off the highway into the nearby jungle. everything blacks out . some time later Jack opens his eyes and sees the car crashed into a tree, Eith the officer in front of him and thinks what the hell happend. then he thinks to find the key in order to escape and suddenly hears thr snap of a leaf. he turns and is in shock! a man , covered in blood stands in front of him, with a wire gouged in his eye but still standing and his arm teiched which made himalive. seeing this jake gets scared and tries to reach to the key to his handcuff while looking at the bleeding man ChatGPT Chapter 1: A light-skinned man, Jake, stared at his handcuffs in the back of the police car while an old song played on the radio. The driver, a middle-aged police officer, glanced at the mirror and urged Jake to confess, claiming it would make things easier for him. He mentioned that stealing from a factory owned by Necro Enterprises could land him in serious trouble. Jake vehemently denied the accusation and closed his eyes. As they continued their conversation and drove along the highway, a looming dark figure suddenly appeared out of nowhere. Jake screamed, "WATCH OUT!" But it was too late; the officer, distracted by the conversation, couldn't react in time. He swerved the car, causing it to veer off the highway and crash into the nearby jungle. Everything went black. Some time later, Jake regained consciousness and found himself inside the wrecked car, with the injured officer in front of him. He wondered what had happened. Trying to escape, he searched for the key to his handcuffs. Suddenly, he heard the snap of a leaf, and he turned in shock. A blood-covered man stood before him, a wire gouged into his eye and his arm twitching, yet somehow still alive. Terrified, Jake reached for the key to his handcuffs while keeping an eye on the bleeding man. User Pls write it in a comic style ChatGPT Chapter 1: The Crash Panel 1: [Image: A light-skinned man, Jake, sits in the back of a police car, staring at his handcuffs with a worried expression. An old song plays on the radio.] Radio: ♫ "I'm not a crook, but I stole your heart..." ♫ Panel 2: [The middle-aged police officer, in the driver's seat, glances at Jake through the rearview mirror.] Officer: "Confess, son. It'll make things easier for you. Stealing from Necro Enterprises is big trouble." Panel 3: [Jake vehemently denies the accusation, closing his eyes in frustration.] Jake: "I didn't steal anything, officer!" Panel 4: [As they continue down the highway, a mysterious dark figure suddenly appears ahead.] Panel 5: [Jake screams, "WATCH OUT!" The officer, still engrossed in conversation, struggles to react in time.] 
Panel 6: [The officer swerves the car, sending it careening off the highway and crashing into the dense jungle. Everything goes black.] Panel 7: [Some time later, Jake awakens inside the wrecked car, his vision blurred. He spots the injured officer in front of him.] Panel 8: [Jake desperately searches for the key to his handcuffs.] Panel 9: [Suddenly, a loud "SNAP" is heard.] Panel 10: [Jake's eyes widen in terror as he turns to see a blood-covered man standing before him. The man has a wire gouged into one eye, and his arm twitches, but he's somehow still standing.] Jake (whispering): "What the hell...?" Panel 11: [Jake frantically reaches for the key to his handcuffs while keeping a wary eye on the eerie, bleeding figure.] To be continued...
wzneric/lora-trained-xl
wzneric
2023-09-17T02:52:58Z
2
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-09-12T21:29:17Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks Tshirt tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - wzneric/lora-trained-xl These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained with the instance prompt "a photo of sks Tshirt" using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
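The card names the base model and instance prompt but gives no inference code. A hedged diffusers sketch, assuming the LoRA weights in this repo are in the standard format written by the diffusers DreamBooth training script:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model named in the card.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("wzneric/lora-trained-xl")

# Generate with the instance prompt from the card's metadata.
image = pipe("a photo of sks Tshirt", num_inference_steps=30).images[0]
image.save("sks_tshirt.png")
```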
CyberHarem/tachibana_alice_idolmastercinderellagirls
CyberHarem
2023-09-17T02:52:36Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/tachibana_alice_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-17T02:36:24Z
--- license: mit datasets: - CyberHarem/tachibana_alice_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of tachibana_alice_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 7020, you need to download `7020/tachibana_alice_idolmastercinderellagirls.pt` as the embedding and `7020/tachibana_alice_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images of the desired character. **The best step we recommend is 7020**, with a score of 0.972. The trigger words are: 1. `tachibana_alice_idolmastercinderellagirls` 2. `brown_hair, brown_eyes, long_hair, blush, bow, hair_bow, open_mouth` This model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 8100 | 0.942 | [Download](8100/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-8100](8100/previews/pattern_1.png) | ![pattern_2-8100](8100/previews/pattern_2.png) | ![pattern_3-8100](8100/previews/pattern_3.png) | ![pattern_4-8100](8100/previews/pattern_4.png) | ![pattern_5-8100](8100/previews/pattern_5.png) | ![pattern_6-8100](8100/previews/pattern_6.png) | [<NSFW, click to see>](8100/previews/pattern_7.png) | [<NSFW, click to see>](8100/previews/pattern_8.png) | ![pattern_9-8100](8100/previews/pattern_9.png) | ![bikini-8100](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) | ![free-8100](8100/previews/free.png) | ![maid-8100](8100/previews/maid.png) | ![miko-8100](8100/previews/miko.png) | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) | ![suit-8100](8100/previews/suit.png) | ![yukata-8100](8100/previews/yukata.png) | | 7560 | 0.959 | [Download](7560/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-7560](7560/previews/pattern_1.png) | ![pattern_2-7560](7560/previews/pattern_2.png) | ![pattern_3-7560](7560/previews/pattern_3.png) | ![pattern_4-7560](7560/previews/pattern_4.png) | ![pattern_5-7560](7560/previews/pattern_5.png) | ![pattern_6-7560](7560/previews/pattern_6.png) | [<NSFW, click to see>](7560/previews/pattern_7.png) | [<NSFW, click to see>](7560/previews/pattern_8.png) | ![pattern_9-7560](7560/previews/pattern_9.png) | ![bikini-7560](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) | ![free-7560](7560/previews/free.png) | ![maid-7560](7560/previews/maid.png) | ![miko-7560](7560/previews/miko.png) | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) | ![suit-7560](7560/previews/suit.png) | ![yukata-7560](7560/previews/yukata.png) | | **7020** | **0.972** | [**Download**](7020/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-7020](7020/previews/pattern_1.png) | ![pattern_2-7020](7020/previews/pattern_2.png) | ![pattern_3-7020](7020/previews/pattern_3.png) | ![pattern_4-7020](7020/previews/pattern_4.png) | ![pattern_5-7020](7020/previews/pattern_5.png) | ![pattern_6-7020](7020/previews/pattern_6.png) | [<NSFW, click to see>](7020/previews/pattern_7.png) | [<NSFW, click to see>](7020/previews/pattern_8.png) | 
![pattern_9-7020](7020/previews/pattern_9.png) | ![bikini-7020](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) | ![free-7020](7020/previews/free.png) | ![maid-7020](7020/previews/maid.png) | ![miko-7020](7020/previews/miko.png) | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) | ![suit-7020](7020/previews/suit.png) | ![yukata-7020](7020/previews/yukata.png) | | 6480 | 0.961 | [Download](6480/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-6480](6480/previews/pattern_1.png) | ![pattern_2-6480](6480/previews/pattern_2.png) | ![pattern_3-6480](6480/previews/pattern_3.png) | ![pattern_4-6480](6480/previews/pattern_4.png) | ![pattern_5-6480](6480/previews/pattern_5.png) | ![pattern_6-6480](6480/previews/pattern_6.png) | [<NSFW, click to see>](6480/previews/pattern_7.png) | [<NSFW, click to see>](6480/previews/pattern_8.png) | ![pattern_9-6480](6480/previews/pattern_9.png) | ![bikini-6480](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) | ![free-6480](6480/previews/free.png) | ![maid-6480](6480/previews/maid.png) | ![miko-6480](6480/previews/miko.png) | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) | ![suit-6480](6480/previews/suit.png) | ![yukata-6480](6480/previews/yukata.png) | | 5940 | 0.943 | [Download](5940/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-5940](5940/previews/pattern_1.png) | ![pattern_2-5940](5940/previews/pattern_2.png) | ![pattern_3-5940](5940/previews/pattern_3.png) | ![pattern_4-5940](5940/previews/pattern_4.png) | ![pattern_5-5940](5940/previews/pattern_5.png) | ![pattern_6-5940](5940/previews/pattern_6.png) | [<NSFW, click to see>](5940/previews/pattern_7.png) | [<NSFW, click to see>](5940/previews/pattern_8.png) | ![pattern_9-5940](5940/previews/pattern_9.png) | ![bikini-5940](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) | ![free-5940](5940/previews/free.png) | ![maid-5940](5940/previews/maid.png) | ![miko-5940](5940/previews/miko.png) | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) | ![suit-5940](5940/previews/suit.png) | ![yukata-5940](5940/previews/yukata.png) | | 5400 | 0.932 | [Download](5400/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-5400](5400/previews/pattern_1.png) | ![pattern_2-5400](5400/previews/pattern_2.png) | ![pattern_3-5400](5400/previews/pattern_3.png) | ![pattern_4-5400](5400/previews/pattern_4.png) | ![pattern_5-5400](5400/previews/pattern_5.png) | ![pattern_6-5400](5400/previews/pattern_6.png) | [<NSFW, click to see>](5400/previews/pattern_7.png) | [<NSFW, click to see>](5400/previews/pattern_8.png) | ![pattern_9-5400](5400/previews/pattern_9.png) | ![bikini-5400](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) | ![free-5400](5400/previews/free.png) | ![maid-5400](5400/previews/maid.png) | ![miko-5400](5400/previews/miko.png) | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) | ![suit-5400](5400/previews/suit.png) | ![yukata-5400](5400/previews/yukata.png) | | 4860 | 0.942 | [Download](4860/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-4860](4860/previews/pattern_1.png) | ![pattern_2-4860](4860/previews/pattern_2.png) | ![pattern_3-4860](4860/previews/pattern_3.png) | ![pattern_4-4860](4860/previews/pattern_4.png) | ![pattern_5-4860](4860/previews/pattern_5.png) | 
![pattern_6-4860](4860/previews/pattern_6.png) | [<NSFW, click to see>](4860/previews/pattern_7.png) | [<NSFW, click to see>](4860/previews/pattern_8.png) | ![pattern_9-4860](4860/previews/pattern_9.png) | ![bikini-4860](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) | ![free-4860](4860/previews/free.png) | ![maid-4860](4860/previews/maid.png) | ![miko-4860](4860/previews/miko.png) | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) | ![suit-4860](4860/previews/suit.png) | ![yukata-4860](4860/previews/yukata.png) | | 4320 | 0.942 | [Download](4320/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-4320](4320/previews/pattern_1.png) | ![pattern_2-4320](4320/previews/pattern_2.png) | ![pattern_3-4320](4320/previews/pattern_3.png) | ![pattern_4-4320](4320/previews/pattern_4.png) | ![pattern_5-4320](4320/previews/pattern_5.png) | ![pattern_6-4320](4320/previews/pattern_6.png) | [<NSFW, click to see>](4320/previews/pattern_7.png) | [<NSFW, click to see>](4320/previews/pattern_8.png) | ![pattern_9-4320](4320/previews/pattern_9.png) | ![bikini-4320](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | ![free-4320](4320/previews/free.png) | ![maid-4320](4320/previews/maid.png) | ![miko-4320](4320/previews/miko.png) | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) | ![suit-4320](4320/previews/suit.png) | ![yukata-4320](4320/previews/yukata.png) | | 3780 | 0.944 | [Download](3780/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-3780](3780/previews/pattern_1.png) | ![pattern_2-3780](3780/previews/pattern_2.png) | ![pattern_3-3780](3780/previews/pattern_3.png) | ![pattern_4-3780](3780/previews/pattern_4.png) | ![pattern_5-3780](3780/previews/pattern_5.png) | ![pattern_6-3780](3780/previews/pattern_6.png) | [<NSFW, click to see>](3780/previews/pattern_7.png) | [<NSFW, click to see>](3780/previews/pattern_8.png) | ![pattern_9-3780](3780/previews/pattern_9.png) | ![bikini-3780](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) | ![free-3780](3780/previews/free.png) | ![maid-3780](3780/previews/maid.png) | ![miko-3780](3780/previews/miko.png) | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) | ![suit-3780](3780/previews/suit.png) | ![yukata-3780](3780/previews/yukata.png) | | 3240 | 0.870 | [Download](3240/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-3240](3240/previews/pattern_1.png) | ![pattern_2-3240](3240/previews/pattern_2.png) | ![pattern_3-3240](3240/previews/pattern_3.png) | ![pattern_4-3240](3240/previews/pattern_4.png) | ![pattern_5-3240](3240/previews/pattern_5.png) | ![pattern_6-3240](3240/previews/pattern_6.png) | [<NSFW, click to see>](3240/previews/pattern_7.png) | [<NSFW, click to see>](3240/previews/pattern_8.png) | ![pattern_9-3240](3240/previews/pattern_9.png) | ![bikini-3240](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) | ![free-3240](3240/previews/free.png) | ![maid-3240](3240/previews/maid.png) | ![miko-3240](3240/previews/miko.png) | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) | ![suit-3240](3240/previews/suit.png) | ![yukata-3240](3240/previews/yukata.png) | | 2700 | 0.845 | [Download](2700/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-2700](2700/previews/pattern_1.png) | ![pattern_2-2700](2700/previews/pattern_2.png) 
| ![pattern_3-2700](2700/previews/pattern_3.png) | ![pattern_4-2700](2700/previews/pattern_4.png) | ![pattern_5-2700](2700/previews/pattern_5.png) | ![pattern_6-2700](2700/previews/pattern_6.png) | [<NSFW, click to see>](2700/previews/pattern_7.png) | [<NSFW, click to see>](2700/previews/pattern_8.png) | ![pattern_9-2700](2700/previews/pattern_9.png) | ![bikini-2700](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) | ![free-2700](2700/previews/free.png) | ![maid-2700](2700/previews/maid.png) | ![miko-2700](2700/previews/miko.png) | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) | ![suit-2700](2700/previews/suit.png) | ![yukata-2700](2700/previews/yukata.png) | | 2160 | 0.892 | [Download](2160/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-2160](2160/previews/pattern_1.png) | ![pattern_2-2160](2160/previews/pattern_2.png) | ![pattern_3-2160](2160/previews/pattern_3.png) | ![pattern_4-2160](2160/previews/pattern_4.png) | ![pattern_5-2160](2160/previews/pattern_5.png) | ![pattern_6-2160](2160/previews/pattern_6.png) | [<NSFW, click to see>](2160/previews/pattern_7.png) | [<NSFW, click to see>](2160/previews/pattern_8.png) | ![pattern_9-2160](2160/previews/pattern_9.png) | ![bikini-2160](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) | ![free-2160](2160/previews/free.png) | ![maid-2160](2160/previews/maid.png) | ![miko-2160](2160/previews/miko.png) | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) | ![suit-2160](2160/previews/suit.png) | ![yukata-2160](2160/previews/yukata.png) | | 1620 | 0.858 | [Download](1620/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-1620](1620/previews/pattern_1.png) | ![pattern_2-1620](1620/previews/pattern_2.png) | ![pattern_3-1620](1620/previews/pattern_3.png) | ![pattern_4-1620](1620/previews/pattern_4.png) | ![pattern_5-1620](1620/previews/pattern_5.png) | ![pattern_6-1620](1620/previews/pattern_6.png) | [<NSFW, click to see>](1620/previews/pattern_7.png) | [<NSFW, click to see>](1620/previews/pattern_8.png) | ![pattern_9-1620](1620/previews/pattern_9.png) | ![bikini-1620](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) | ![free-1620](1620/previews/free.png) | ![maid-1620](1620/previews/maid.png) | ![miko-1620](1620/previews/miko.png) | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) | ![suit-1620](1620/previews/suit.png) | ![yukata-1620](1620/previews/yukata.png) | | 1080 | 0.783 | [Download](1080/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-1080](1080/previews/pattern_1.png) | ![pattern_2-1080](1080/previews/pattern_2.png) | ![pattern_3-1080](1080/previews/pattern_3.png) | ![pattern_4-1080](1080/previews/pattern_4.png) | ![pattern_5-1080](1080/previews/pattern_5.png) | ![pattern_6-1080](1080/previews/pattern_6.png) | [<NSFW, click to see>](1080/previews/pattern_7.png) | [<NSFW, click to see>](1080/previews/pattern_8.png) | ![pattern_9-1080](1080/previews/pattern_9.png) | ![bikini-1080](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) | ![free-1080](1080/previews/free.png) | ![maid-1080](1080/previews/maid.png) | ![miko-1080](1080/previews/miko.png) | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) | ![suit-1080](1080/previews/suit.png) | ![yukata-1080](1080/previews/yukata.png) | | 540 | 0.737 | 
[Download](540/tachibana_alice_idolmastercinderellagirls.zip) | ![pattern_1-540](540/previews/pattern_1.png) | ![pattern_2-540](540/previews/pattern_2.png) | ![pattern_3-540](540/previews/pattern_3.png) | ![pattern_4-540](540/previews/pattern_4.png) | ![pattern_5-540](540/previews/pattern_5.png) | ![pattern_6-540](540/previews/pattern_6.png) | [<NSFW, click to see>](540/previews/pattern_7.png) | [<NSFW, click to see>](540/previews/pattern_8.png) | ![pattern_9-540](540/previews/pattern_9.png) | ![bikini-540](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) | ![free-540](540/previews/free.png) | ![maid-540](540/previews/maid.png) | ![miko-540](540/previews/miko.png) | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) | ![suit-540](540/previews/suit.png) | ![yukata-540](540/previews/yukata.png) |
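Since each pt/safetensors pair lives in a per-step subfolder of the repo, fetching the recommended step 7020 pair programmatically might look like the sketch below; the card itself does not prescribe a download method, and most users simply place the two files in their Stable Diffusion UI's embeddings and LoRA folders.

```python
from huggingface_hub import hf_hub_download

repo = "CyberHarem/tachibana_alice_idolmastercinderellagirls"

# Embedding (pt) file for the recommended step 7020.
pt_path = hf_hub_download(repo_id=repo, filename="7020/tachibana_alice_idolmastercinderellagirls.pt")
# Matching LoRA weights; the card says both files must be used together.
lora_path = hf_hub_download(repo_id=repo, filename="7020/tachibana_alice_idolmastercinderellagirls.safetensors")

print(pt_path, lora_path)
```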
aapoorv75/whisper-large-v2-hindi-for-yes-no-noise-300steps
aapoorv75
2023-09-17T02:48:47Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-17T02:48:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
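The card only records the quantization config used during training; it does not show how to load the adapter. A hedged sketch with `peft`, assuming the adapter sits on top of openai/whisper-large-v2 (inferred from the repo name, not stated in the card) and that bitsandbytes is installed for 8-bit loading:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v2"  # assumption: inferred from the adapter's repo name
adapter_id = "aapoorv75/whisper-large-v2-hindi-for-yes-no-noise-300steps"

processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# Attach the PEFT adapter weights from the Hub on top of the 8-bit base model.
model = PeftModel.from_pretrained(base, adapter_id)
```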
alayaran/bodo-roberta-base-sentencepiece-mlm
alayaran
2023-09-17T02:41:59Z
126
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "brx", "dataset:alayaran/bodo-monolingual-dataset", "base_model:alayaran/bodo-roberta-base-sentencepiece-mlm", "base_model:finetune:alayaran/bodo-roberta-base-sentencepiece-mlm", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-16T18:40:09Z
--- license: mit base_model: alayaran/bodo-roberta-base-sentencepiece-mlm tags: - generated_from_trainer datasets: - alayaran/bodo-monolingual-dataset metrics: - accuracy model-index: - name: bodo-roberta-base-sentencepiece-mlm results: - task: name: Masked Language Modeling type: fill-mask dataset: name: alayaran/bodo-monolingual-dataset type: alayaran/bodo-monolingual-dataset metrics: - name: Accuracy type: accuracy value: 0.1152087425920729 widget: - text: बिजाथि महरै <mask> मोनबो थांखि गैया । example_title: फोसावनायनि - text: देहा गोनां जानायनि <mask> थांनानै थानायल’ख्रुइ गोबांसिन। example_title: ओंथिआ language: - brx --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bodo-roberta-base-sentencepiece-mlm This model is a fine-tuned version of [alayaran/bodo-roberta-base-sentencepiece-mlm](https://huggingface.co/alayaran/bodo-roberta-base-sentencepiece-mlm) on the alayaran/bodo-monolingual-dataset dataset. It achieves the following results on the evaluation set: - Loss: 7.6855 - Accuracy: 0.1152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 18.0 ### Training results ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
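The widget sentences in the card's metadata can be reproduced locally with the fill-mask pipeline; a minimal sketch:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="alayaran/bodo-roberta-base-sentencepiece-mlm")

# One of the widget sentences from the card; <mask> is the token the model fills in.
for prediction in fill_mask("बिजाथि महरै <mask> मोनबो थांखि गैया ।"):
    print(prediction["token_str"], prediction["score"])
```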
CyberHarem/atora_kumakumakumabear
CyberHarem
2023-09-17T02:24:06Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/atora_kumakumakumabear", "license:mit", "region:us" ]
text-to-image
2023-09-17T02:12:49Z
--- license: mit datasets: - CyberHarem/atora_kumakumakumabear pipeline_tag: text-to-image tags: - art --- # Lora of atora_kumakumakumabear This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 4760, you need to download `4760/atora_kumakumakumabear.pt` as the embedding and `4760/atora_kumakumakumabear.safetensors` for loading Lora. By using both files together, you can generate images of the desired character. **The best step we recommend is 4760**, with a score of 0.926. The trigger words are: 1. `atora_kumakumakumabear` 2. `black_hair, mole, mole_under_mouth, hair_between_eyes, breasts, brown_eyes, blush, dark_skin, yellow_eyes` This model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.853 | [Download](5100/atora_kumakumakumabear.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | **4760** | **0.926** | [**Download**](4760/atora_kumakumakumabear.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.858 | [Download](4420/atora_kumakumakumabear.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.890 | [Download](4080/atora_kumakumakumabear.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to 
see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.919 | [Download](3740/atora_kumakumakumabear.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.859 | [Download](3400/atora_kumakumakumabear.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.910 | [Download](3060/atora_kumakumakumabear.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.899 | [Download](2720/atora_kumakumakumabear.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.849 | [Download](2380/atora_kumakumakumabear.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to 
see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.915 | [Download](2040/atora_kumakumakumabear.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.845 | [Download](1700/atora_kumakumakumabear.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.915 | [Download](1360/atora_kumakumakumabear.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.908 | [Download](1020/atora_kumakumakumabear.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.907 | [Download](680/atora_kumakumakumabear.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | 
![yukata-680](680/previews/yukata.png) | | 340 | 0.812 | [Download](340/atora_kumakumakumabear.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
Tanor/SRGPTSENTNEG0
Tanor
2023-09-17T02:17:22Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "generated_from_trainer", "base_model:Tanor/SRGPTSENTNEG0", "base_model:finetune:Tanor/SRGPTSENTNEG0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T22:47:18Z
--- base_model: Tanor/SRGPTSENTNEG0 tags: - generated_from_trainer metrics: - f1 model-index: - name: SRGPTSENTNEG0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SRGPTSENTNEG0 This model is a fine-tuned version of [Tanor/SRGPTSENTNEG0](https://huggingface.co/Tanor/SRGPTSENTNEG0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1939 - F1: 0.3478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.0521 | 1.0 | 2666 | 0.0931 | 0.4359 | | 0.0511 | 2.0 | 5332 | 0.1256 | 0.4561 | | 0.0173 | 3.0 | 7998 | 0.1849 | 0.4194 | | 0.0091 | 4.0 | 10665 | 0.1939 | 0.3478 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.1.0.dev20230801 - Datasets 2.14.2 - Tokenizers 0.13.3
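The card reports F1 but includes no inference snippet; a minimal sketch with the text-classification pipeline (the example sentence is arbitrary Serbian text, and the card does not document the label names) could be:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Tanor/SRGPTSENTNEG0")

# Arbitrary Serbian example sentence, used only to illustrate the call.
print(classifier("Ovo je samo primer rečenice."))
```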
TohidA/InstructGPTJ-mini
TohidA
2023-09-17T02:08:35Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-17T01:47:55Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
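The card documents only the quantization config; a hedged loading sketch is given below. The base checkpoint (`EleutherAI/gpt-j-6b`) and the prompt are assumptions inferred from the repository name, since the card does not name the base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the adapter was trained on top of GPT-J; the card does not say.
base_id = "EleutherAI/gpt-j-6b"
adapter_id = "TohidA/InstructGPTJ-mini"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,   # matches the bitsandbytes config listed in the card
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Explain what a PEFT adapter is in one sentence.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```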
Evan-Lin/yelp-attractive-3
Evan-Lin
2023-09-17T01:46:55Z
49
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-09-16T09:11:10Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin/yelp-attractive-3") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/yelp-attractive-3") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/yelp-attractive-3") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
nadeemraja/my_awesome_wnut_model
nadeemraja
2023-09-17T01:32:22Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "dataset:wnut_17", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-17T01:28:29Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: test args: wnut_17 metrics: - name: Precision type: precision value: 0.5449189985272459 - name: Recall type: recall value: 0.34291010194624655 - name: F1 type: f1 value: 0.42093287827076226 - name: Accuracy type: accuracy value: 0.9445513231584798 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2660 - Precision: 0.5449 - Recall: 0.3429 - F1: 0.4209 - Accuracy: 0.9446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2587 | 0.5214 | 0.3605 | 0.4263 | 0.9441 | | No log | 2.0 | 426 | 0.2660 | 0.5449 | 0.3429 | 0.4209 | 0.9446 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
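A minimal inference sketch with the token-classification pipeline follows; the aggregation strategy and example sentence are illustrative choices, not part of the card.

```python
from transformers import pipeline

# Hedged sketch: NER inference with the WNUT-17 fine-tuned checkpoint.
ner = pipeline(
    task="token-classification",
    model="nadeemraja/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```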
platzi/model-Beans-alejandro-arroyo
platzi
2023-09-17T01:24:49Z
228
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "base_model:google/vit-base-patch32-384", "base_model:finetune:google/vit-base-patch32-384", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-17T00:58:46Z
--- license: apache-2.0 base_model: google/vit-base-patch32-384 tags: - image-classification - generated_from_trainer datasets: - beans metrics: - accuracy widget: - src: https://huggingface.co/platzi/model-Beans-alejandro-arroyo/raw/main/Healty.jpeg example_title: Healty - src: https://huggingface.co/platzi/model-Beans-alejandro-arroyo/raw/main/bean_rust.jpeg example_title: Bean Rust model-index: - name: model-Beans-alejandro-arroyo results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9924812030075187 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model-Beans-alejandro-arroyo This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0078 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.148 | 3.85 | 500 | 0.0078 | 0.9925 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
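A minimal inference sketch follows; the input URL is adapted from the widget entries in the card metadata, and a local file path or PIL image works just as well.

```python
from transformers import pipeline

# Hedged sketch: image classification with the fine-tuned beans checkpoint.
classifier = pipeline(
    task="image-classification",
    model="platzi/model-Beans-alejandro-arroyo",
)

# Example image referenced by the card's widget config; a local leaf photo also works.
image_url = "https://huggingface.co/platzi/model-Beans-alejandro-arroyo/resolve/main/Healty.jpeg"
print(classifier(image_url))
```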
ativilambit/results
ativilambit
2023-09-17T01:21:11Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-15T00:43:36Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3877 - Rouge1: 0.1916 - Rouge2: 0.0900 - Rougel: 0.1578 - Rougelsum: 0.1799 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 17 | 6.3841 | 0.0258 | 0.0043 | 0.0256 | 0.0255 | | No log | 2.0 | 34 | 3.6116 | 0.0214 | 0.0020 | 0.0193 | 0.0203 | | No log | 3.0 | 51 | 1.7410 | 0.0075 | 0.0016 | 0.0070 | 0.0071 | | No log | 4.0 | 68 | 0.8219 | 0.0037 | 0.0013 | 0.0025 | 0.0025 | | No log | 5.0 | 85 | 0.6052 | 0.0095 | 0.0050 | 0.0067 | 0.0074 | | No log | 6.0 | 102 | 0.4904 | 0.1239 | 0.0566 | 0.0988 | 0.1164 | | No log | 7.0 | 119 | 0.4477 | 0.1669 | 0.0745 | 0.1366 | 0.1570 | | No log | 8.0 | 136 | 0.4218 | 0.1765 | 0.0781 | 0.1464 | 0.1665 | | No log | 9.0 | 153 | 0.4044 | 0.1923 | 0.0955 | 0.1604 | 0.1845 | | No log | 10.0 | 170 | 0.3937 | 0.1909 | 0.0967 | 0.1636 | 0.1794 | | No log | 11.0 | 187 | 0.3887 | 0.1919 | 0.0893 | 0.1594 | 0.1795 | | No log | 11.76 | 200 | 0.3877 | 0.1916 | 0.0900 | 0.1578 | 0.1799 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
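The card does not state the downstream task, but the tracked ROUGE scores suggest a summarization-style text2text objective; the sketch below reflects that assumption, and the input text and `summarize:` prefix are illustrative.

```python
from transformers import pipeline

# Hedged sketch: generic text2text generation with the fine-tuned FLAN-T5 checkpoint.
generator = pipeline(
    task="text2text-generation",
    model="ativilambit/results",
)

text = (
    "summarize: The city council met on Tuesday to discuss the new transit plan, "
    "which adds bus lanes and extends service hours on weekends."
)
print(generator(text, max_new_tokens=60))
```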
gonchisi/roberta-base-bne-finetuned-new_or_used_title-gpu
gonchisi
2023-09-17T01:00:20Z
116
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:BSC-LT/roberta-base-bne", "base_model:finetune:BSC-LT/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-16T19:35:32Z
--- license: apache-2.0 base_model: BSC-TeMU/roberta-base-bne tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-new_or_used_title-gpu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-new_or_used_title-gpu This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4096 - Accuracy: 0.8476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3803 | 1.0 | 5000 | 0.3575 | 0.8442 | | 0.2344 | 2.0 | 10000 | 0.4096 | 0.8476 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
erickrribeiro/paraphrase-bert-portuguese
erickrribeiro
2023-09-17T00:56:49Z
115
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue-ptpt", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-16T23:41:09Z
--- license: mit base_model: neuralmind/bert-base-portuguese-cased tags: - generated_from_trainer datasets: - glue-ptpt metrics: - accuracy - f1 model-index: - name: paraphrase-bert-portuguese results: - task: name: Text Classification type: text-classification dataset: name: glue-ptpt type: glue-ptpt config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8676470588235294 - name: F1 type: f1 value: 0.9028776978417268 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paraphrase-bert-portuguese This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the glue-ptpt dataset. It achieves the following results on the evaluation set: - Loss: 1.2267 - Accuracy: 0.8676 - F1: 0.9029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.7241 | 0.8603 | 0.9012 | | 0.0658 | 2.0 | 918 | 0.7902 | 0.8725 | 0.9071 | | 0.1499 | 3.0 | 1377 | 0.7895 | 0.8676 | 0.9022 | | 0.0654 | 4.0 | 1836 | 0.9841 | 0.8676 | 0.9036 | | 0.018 | 5.0 | 2295 | 1.0520 | 0.8627 | 0.8989 | | 0.0144 | 6.0 | 2754 | 1.1002 | 0.8725 | 0.9081 | | 0.007 | 7.0 | 3213 | 1.1303 | 0.8652 | 0.9005 | | 0.0056 | 8.0 | 3672 | 1.2298 | 0.8725 | 0.9081 | | 0.0019 | 9.0 | 4131 | 1.2353 | 0.8701 | 0.9038 | | 0.0001 | 10.0 | 4590 | 1.2267 | 0.8676 | 0.9029 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
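Because the model scores sentence pairs (MRPC-style paraphrase detection), a hedged sketch that passes both sentences through the tokenizer is shown below; the example pair and the label interpretation follow the usual MRPC convention and are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "erickrribeiro/paraphrase-bert-portuguese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Portuguese sentence pair.
sentence_a = "O governo anunciou novas medidas econômicas ontem."
sentence_b = "Novas medidas econômicas foram anunciadas pelo governo ontem."

inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# Assumption: label 1 = paraphrase, label 0 = not a paraphrase (MRPC convention).
print("paraphrase" if pred == 1 else "not a paraphrase")
```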
platzi/platzi-distilroberta-base-mrpc-glue-sergio-vega
platzi
2023-09-17T00:51:00Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-17T00:18:03Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-distilroberta-base-mrpc-glue-sergio-vega results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8284313725490197 - name: F1 type: f1 value: 0.8833333333333333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilroberta-base-mrpc-glue-sergio-vega This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.4811 - Accuracy: 0.8284 - F1: 0.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5192 | 1.09 | 500 | 0.4811 | 0.8284 | 0.8833 | | 0.31 | 2.18 | 1000 | 0.6382 | 0.8431 | 0.8889 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
Yurio27/Lesson_01
Yurio27
2023-09-17T00:47:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-17T00:46:42Z
--- license: creativeml-openrail-m ---
dwang-LI/segformer-b-finetuned-segments-sidewalk-2
dwang-LI
2023-09-17T00:44:36Z
192
0
transformers
[ "transformers", "pytorch", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-09-16T22:39:15Z
--- license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b-finetuned-segments-sidewalk-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b-finetuned-segments-sidewalk-2 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 0.7733 - Mean Iou: 0.2394 - Mean Accuracy: 0.2885 - Overall Accuracy: 0.8145 - Accuarcy Unlabeled: nan - Accuarcy Flat-road: 0.9002 - Accuarcy Flat-sidewalk: 0.9256 - Accuarcy Flat-crosswalk: 0.6731 - Accuarcy Flat-cyclinglane: 0.7624 - Accuarcy Flat-parkingdriveway: 0.3720 - Accuarcy Flat-railtrack: nan - Accuarcy Flat-curb: 0.3753 - Accuarcy Human-person: 0.0482 - Accuarcy Human-rider: 0.0 - Accuarcy Vehicle-car: 0.9125 - Accuarcy Vehicle-truck: 0.0 - Accuarcy Vehicle-bus: 0.0 - Accuarcy Vehicle-tramtrain: nan - Accuarcy Vehicle-motorcycle: 0.0 - Accuarcy Vehicle-bicycle: 0.0 - Accuarcy Vehicle-caravan: 0.0 - Accuarcy Vehicle-cartrailer: 0.0 - Accuarcy Construction-building: 0.8988 - Accuarcy Construction-door: 0.0 - Accuarcy Construction-wall: 0.3240 - Accuarcy Construction-fenceguardrail: 0.0009 - Accuarcy Construction-bridge: 0.0 - Accuarcy Construction-tunnel: nan - Accuarcy Construction-stairs: 0.0 - Accuarcy Object-pole: 0.0228 - Accuarcy Object-trafficsign: 0.0 - Accuarcy Object-trafficlight: 0.0 - Accuarcy Nature-vegetation: 0.9283 - Accuarcy Nature-terrain: 0.8528 - Accuarcy Sky: 0.9460 - Accuarcy Void-ground: 0.0 - Accuarcy Void-dynamic: 0.0 - Accuarcy Void-static: 0.0014 - Accuarcy Void-unclear: 0.0 - Iou Unlabeled: nan - Iou Flat-road: 0.7132 - Iou Flat-sidewalk: 0.8399 - Iou Flat-crosswalk: 0.5677 - Iou Flat-cyclinglane: 0.6711 - Iou Flat-parkingdriveway: 0.2585 - Iou Flat-railtrack: nan - Iou Flat-curb: 0.3157 - Iou Human-person: 0.0474 - Iou Human-rider: 0.0 - Iou Vehicle-car: 0.7025 - Iou Vehicle-truck: 0.0 - Iou Vehicle-bus: 0.0 - Iou Vehicle-tramtrain: nan - Iou Vehicle-motorcycle: 0.0 - Iou Vehicle-bicycle: 0.0 - Iou Vehicle-caravan: 0.0 - Iou Vehicle-cartrailer: 0.0 - Iou Construction-building: 0.6194 - Iou Construction-door: 0.0 - Iou Construction-wall: 0.2615 - Iou Construction-fenceguardrail: 0.0009 - Iou Construction-bridge: 0.0 - Iou Construction-tunnel: nan - Iou Construction-stairs: 0.0 - Iou Object-pole: 0.0227 - Iou Object-trafficsign: 0.0 - Iou Object-trafficlight: 0.0 - Iou Nature-vegetation: 0.7851 - Iou Nature-terrain: 0.7352 - Iou Sky: 0.8791 - Iou Void-ground: 0.0 - Iou Void-dynamic: 0.0 - Iou Void-static: 0.0014 - Iou Void-unclear: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuarcy Unlabeled | Accuarcy Flat-road | Accuarcy Flat-sidewalk | Accuarcy Flat-crosswalk | Accuarcy Flat-cyclinglane | Accuarcy Flat-parkingdriveway | 
Accuarcy Flat-railtrack | Accuarcy Flat-curb | Accuarcy Human-person | Accuarcy Human-rider | Accuarcy Vehicle-car | Accuarcy Vehicle-truck | Accuarcy Vehicle-bus | Accuarcy Vehicle-tramtrain | Accuarcy Vehicle-motorcycle | Accuarcy Vehicle-bicycle | Accuarcy Vehicle-caravan | Accuarcy Vehicle-cartrailer | Accuarcy Construction-building | Accuarcy Construction-door | Accuarcy Construction-wall | Accuarcy Construction-fenceguardrail | Accuarcy Construction-bridge | Accuarcy Construction-tunnel | Accuarcy Construction-stairs | Accuarcy Object-pole | Accuarcy Object-trafficsign | Accuarcy Object-trafficlight | Accuarcy Nature-vegetation | Accuarcy Nature-terrain | Accuarcy Sky | Accuarcy Void-ground | Accuarcy Void-dynamic | Accuarcy Void-static | Accuarcy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:| | 2.2728 | 0.59 | 20 | 2.3946 | 0.1035 | 0.1549 | 0.6540 | nan | 0.6440 | 0.9384 | 0.0 | 0.0006 | 0.0001 | nan | 0.0001 | 0.0 | 0.0 | 0.9243 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6269 | 0.0 | 0.0000 | 0.0002 | 0.0 | nan | 
0.0 | 0.0 | 0.0 | 0.0 | 0.9320 | 0.0116 | 0.7234 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4920 | 0.6851 | 0.0 | 0.0006 | 0.0001 | nan | 0.0001 | 0.0 | 0.0 | 0.3557 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.4837 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5828 | 0.0115 | 0.7007 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.9006 | 1.18 | 40 | 1.7230 | 0.1153 | 0.1706 | 0.6814 | nan | 0.8635 | 0.8762 | 0.0 | 0.0003 | 0.0003 | nan | 0.0 | 0.0 | 0.0 | 0.8614 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8115 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9315 | 0.0405 | 0.9034 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4876 | 0.7405 | 0.0 | 0.0003 | 0.0003 | nan | 0.0 | 0.0 | 0.0 | 0.5225 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5210 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6069 | 0.0399 | 0.7696 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.6721 | 1.76 | 60 | 1.4574 | 0.1289 | 0.1783 | 0.6968 | nan | 0.8799 | 0.8822 | 0.0 | 0.0528 | 0.0003 | nan | 0.0 | 0.0 | 0.0 | 0.8812 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8573 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9298 | 0.1473 | 0.8959 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4937 | 0.7555 | 0.0 | 0.0519 | 0.0003 | nan | 0.0 | 0.0 | 0.0 | 0.5454 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5547 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6303 | 0.1427 | 0.8205 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.4066 | 2.35 | 80 | 1.3422 | 0.1589 | 0.2055 | 0.7457 | nan | 0.8230 | 0.9475 | 0.0 | 0.3015 | 0.0047 | nan | 0.0000 | 0.0 | 0.0 | 0.8977 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8695 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9091 | 0.6841 | 0.9322 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6093 | 0.7599 | 0.0 | 0.2787 | 0.0046 | nan | 0.0000 | 0.0 | 0.0 | 0.5489 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5596 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7275 | 0.6092 | 0.8285 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.3429 | 2.94 | 100 | 1.1795 | 0.1653 | 0.2103 | 0.7562 | nan | 0.8569 | 0.9495 | 0.0 | 0.3507 | 0.0066 | nan | 0.0000 | 0.0 | 0.0 | 0.8981 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8869 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9026 | 0.7728 | 0.8950 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6153 | 0.7730 | 0.0 | 0.3326 | 0.0065 | nan | 0.0000 | 0.0 | 0.0 | 0.5899 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5742 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7403 | 0.6481 | 0.8448 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.2661 | 3.53 | 120 | 1.1008 | 0.1712 | 0.2174 | 0.7629 | nan | 0.8484 | 0.9495 | 0.0 | 0.4917 | 0.0181 | nan | 0.0001 | 0.0 | 0.0 | 0.8996 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9043 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8869 | 0.8036 | 0.9371 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6100 | 0.7894 | 0.0 | 0.4346 | 0.0175 | nan | 0.0001 | 0.0 | 0.0 | 0.6153 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5608 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7533 | 0.6752 | 0.8508 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.2166 | 4.12 | 140 | 1.0514 | 0.1771 | 0.2232 | 0.7695 | nan | 0.8815 | 0.9342 | 0.0 | 0.5539 | 0.0713 | nan | 0.0030 | 0.0 | 0.0 | 0.9014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9029 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9068 | 0.8398 | 0.9225 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6195 | 0.7981 | 0.0 | 0.5017 | 0.0642 | nan | 0.0030 | 0.0 | 0.0 | 0.6222 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5694 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7585 | 0.6979 | 0.8546 | 
0.0 | 0.0 | 0.0 | 0.0 | | 1.0262 | 4.71 | 160 | 1.0025 | 0.1782 | 0.2236 | 0.7665 | nan | 0.9188 | 0.9111 | 0.0 | 0.5462 | 0.1006 | nan | 0.0031 | 0.0 | 0.0 | 0.8814 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8881 | 0.0 | 0.0027 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9260 | 0.8130 | 0.9404 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5776 | 0.8071 | 0.0 | 0.5005 | 0.0888 | nan | 0.0031 | 0.0 | 0.0 | 0.6651 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5803 | 0.0 | 0.0027 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7415 | 0.7028 | 0.8558 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.0928 | 5.29 | 180 | 0.9698 | 0.1852 | 0.2308 | 0.7778 | nan | 0.8513 | 0.9428 | 0.0 | 0.6760 | 0.1497 | nan | 0.0419 | 0.0 | 0.0 | 0.8856 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9132 | 0.0 | 0.0056 | 0.0 | 0.0 | nan | 0.0 | 0.0002 | 0.0 | 0.0 | 0.9134 | 0.8535 | 0.9219 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6410 | 0.8062 | 0.0 | 0.5617 | 0.1228 | nan | 0.0405 | 0.0 | 0.0 | 0.6597 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5705 | 0.0 | 0.0056 | 0.0 | 0.0 | nan | 0.0 | 0.0002 | 0.0 | 0.0 | 0.7603 | 0.7081 | 0.8642 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.8736 | 5.88 | 200 | 0.9250 | 0.1906 | 0.2370 | 0.7850 | nan | 0.9149 | 0.9249 | 0.0001 | 0.7226 | 0.1944 | nan | 0.0715 | 0.0027 | 0.0 | 0.8853 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8917 | 0.0 | 0.0153 | 0.0 | 0.0 | nan | 0.0 | 0.0005 | 0.0 | 0.0 | 0.9353 | 0.8470 | 0.9402 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6511 | 0.8250 | 0.0001 | 0.5978 | 0.1516 | nan | 0.0682 | 0.0027 | 0.0 | 0.6817 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5862 | 0.0 | 0.0152 | 0.0 | 0.0 | nan | 0.0 | 0.0005 | 0.0 | 0.0 | 0.7477 | 0.7159 | 0.8635 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.7832 | 6.47 | 220 | 0.8852 | 0.1961 | 0.2421 | 0.7875 | nan | 0.8962 | 0.9385 | 0.0642 | 0.6975 | 0.2064 | nan | 0.1581 | 0.0003 | 0.0 | 0.8995 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9011 | 0.0 | 0.0392 | 0.0 | 0.0 | nan | 0.0 | 0.0009 | 0.0 | 0.0 | 0.8974 | 0.8728 | 0.9342 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6576 | 0.8222 | 0.0624 | 0.6239 | 0.1577 | nan | 0.1421 | 0.0003 | 0.0 | 0.6802 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5989 | 0.0 | 0.0383 | 0.0 | 0.0 | nan | 0.0 | 0.0009 | 0.0 | 0.0 | 0.7547 | 0.6706 | 0.8700 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.7822 | 7.06 | 240 | 0.8621 | 0.2145 | 0.2598 | 0.7992 | nan | 0.8827 | 0.9398 | 0.4415 | 0.7426 | 0.2656 | nan | 0.2218 | 0.0023 | 0.0 | 0.8967 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9092 | 0.0 | 0.0558 | 0.0000 | 0.0 | nan | 0.0 | 0.0020 | 0.0 | 0.0 | 0.9249 | 0.8259 | 0.9429 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6911 | 0.8250 | 0.3902 | 0.6320 | 0.2017 | nan | 0.1950 | 0.0023 | 0.0 | 0.6915 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5886 | 0.0 | 0.0540 | 0.0000 | 0.0 | nan | 0.0 | 0.0020 | 0.0 | 0.0 | 0.7732 | 0.7329 | 0.8703 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.6742 | 7.65 | 260 | 0.8371 | 0.2193 | 0.2667 | 0.8027 | nan | 0.8766 | 0.9312 | 0.3983 | 0.7724 | 0.2975 | nan | 0.2975 | 0.0055 | 0.0 | 0.9111 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9086 | 0.0 | 0.1602 | 0.0001 | 0.0 | nan | 0.0 | 0.0034 | 0.0 | 0.0 | 0.9371 | 0.8321 | 0.9353 | 0.0 | 0.0 | 0.0000 | 0.0 | nan | 0.6894 | 0.8388 | 0.3591 | 0.6398 | 0.2119 | nan | 0.2519 | 0.0055 | 0.0 | 0.6754 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6033 | 0.0 | 0.1492 | 0.0001 | 0.0 | nan | 0.0 | 0.0034 | 0.0 | 0.0 | 0.7671 | 0.7293 | 0.8750 | 0.0 | 0.0 | 0.0000 | 0.0 | | 0.8116 | 8.24 | 280 | 0.8277 | 0.2314 | 0.2819 | 0.8087 | nan | 0.8894 | 0.9207 | 0.6812 | 0.7773 | 0.3594 | nan | 0.3120 | 0.0109 | 0.0 | 0.9016 | 0.0 | 
0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8885 | 0.0 | 0.2424 | 0.0005 | 0.0 | nan | 0.0 | 0.0107 | 0.0 | 0.0 | 0.9398 | 0.8575 | 0.9461 | 0.0 | 0.0 | 0.0003 | 0.0 | nan | 0.7112 | 0.8407 | 0.5738 | 0.6399 | 0.2424 | nan | 0.2666 | 0.0108 | 0.0 | 0.6924 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6145 | 0.0 | 0.2148 | 0.0005 | 0.0 | nan | 0.0 | 0.0106 | 0.0 | 0.0 | 0.7579 | 0.7244 | 0.8738 | 0.0 | 0.0 | 0.0003 | 0.0 | | 0.7791 | 8.82 | 300 | 0.8059 | 0.2255 | 0.2723 | 0.8077 | nan | 0.8684 | 0.9414 | 0.4680 | 0.7998 | 0.2901 | nan | 0.3174 | 0.0107 | 0.0 | 0.8846 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9111 | 0.0 | 0.2193 | 0.0000 | 0.0 | nan | 0.0 | 0.0099 | 0.0 | 0.0 | 0.9290 | 0.8439 | 0.9465 | 0.0 | 0.0 | 0.0000 | 0.0 | nan | 0.7039 | 0.8383 | 0.4188 | 0.6308 | 0.2131 | nan | 0.2698 | 0.0106 | 0.0 | 0.7114 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6008 | 0.0 | 0.1942 | 0.0000 | 0.0 | nan | 0.0 | 0.0099 | 0.0 | 0.0 | 0.7791 | 0.7343 | 0.8760 | 0.0 | 0.0 | 0.0000 | 0.0 | | 0.7334 | 9.41 | 320 | 0.7962 | 0.2342 | 0.2830 | 0.8117 | nan | 0.8921 | 0.9332 | 0.6837 | 0.7454 | 0.3381 | nan | 0.3264 | 0.0298 | 0.0 | 0.9198 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9018 | 0.0 | 0.2712 | 0.0003 | 0.0 | nan | 0.0 | 0.0182 | 0.0 | 0.0 | 0.9194 | 0.8508 | 0.9434 | 0.0 | 0.0 | 0.0008 | 0.0 | nan | 0.7121 | 0.8388 | 0.5627 | 0.6590 | 0.2316 | nan | 0.2794 | 0.0296 | 0.0 | 0.6884 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6204 | 0.0 | 0.2324 | 0.0003 | 0.0 | nan | 0.0 | 0.0182 | 0.0 | 0.0 | 0.7820 | 0.7278 | 0.8762 | 0.0 | 0.0 | 0.0008 | 0.0 | | 0.7645 | 10.0 | 340 | 0.7783 | 0.2342 | 0.2809 | 0.8133 | nan | 0.8999 | 0.9347 | 0.5997 | 0.7491 | 0.3278 | nan | 0.3613 | 0.0164 | 0.0 | 0.9043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9156 | 0.0 | 0.2684 | 0.0003 | 0.0 | nan | 0.0 | 0.0167 | 0.0 | 0.0 | 0.9235 | 0.8454 | 0.9455 | 0.0 | 0.0 | 0.0007 | 0.0 | nan | 0.7218 | 0.8409 | 0.5162 | 0.6738 | 0.2390 | nan | 0.3039 | 0.0162 | 0.0 | 0.7015 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6019 | 0.0 | 0.2260 | 0.0003 | 0.0 | nan | 0.0 | 0.0167 | 0.0 | 0.0 | 0.7860 | 0.7381 | 0.8764 | 0.0 | 0.0 | 0.0007 | 0.0 | | 0.6792 | 10.59 | 360 | 0.7774 | 0.2358 | 0.2841 | 0.8141 | nan | 0.8954 | 0.9341 | 0.6272 | 0.7826 | 0.3543 | nan | 0.3360 | 0.0300 | 0.0 | 0.9162 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8964 | 0.0 | 0.2909 | 0.0005 | 0.0 | nan | 0.0 | 0.0199 | 0.0 | 0.0 | 0.9226 | 0.8558 | 0.9443 | 0.0 | 0.0 | 0.0010 | 0.0 | nan | 0.7198 | 0.8402 | 0.5426 | 0.6699 | 0.2489 | nan | 0.2900 | 0.0297 | 0.0 | 0.6966 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6186 | 0.0 | 0.2450 | 0.0005 | 0.0 | nan | 0.0 | 0.0199 | 0.0 | 0.0 | 0.7835 | 0.7251 | 0.8784 | 0.0 | 0.0 | 0.0010 | 0.0 | | 0.8047 | 11.18 | 380 | 0.7734 | 0.2388 | 0.2878 | 0.8147 | nan | 0.8924 | 0.9265 | 0.6512 | 0.7739 | 0.3846 | nan | 0.3762 | 0.0383 | 0.0 | 0.9122 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9053 | 0.0 | 0.3142 | 0.0005 | 0.0 | nan | 0.0 | 0.0216 | 0.0 | 0.0 | 0.9303 | 0.8513 | 0.9427 | 0.0 | 0.0 | 0.0014 | 0.0 | nan | 0.7171 | 0.8421 | 0.5575 | 0.6761 | 0.2609 | nan | 0.3165 | 0.0376 | 0.0 | 0.6982 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6155 | 0.0 | 0.2551 | 0.0005 | 0.0 | nan | 0.0 | 0.0215 | 0.0 | 0.0 | 0.7854 | 0.7377 | 0.8797 | 0.0 | 0.0 | 0.0014 | 0.0 | | 0.7136 | 11.76 | 400 | 0.7733 | 0.2394 | 0.2885 | 0.8145 | nan | 0.9002 | 0.9256 | 0.6731 | 0.7624 | 0.3720 | nan | 0.3753 | 0.0482 | 0.0 | 0.9125 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8988 | 0.0 | 0.3240 | 0.0009 | 0.0 | nan | 0.0 | 0.0228 | 
0.0 | 0.0 | 0.9283 | 0.8528 | 0.9460 | 0.0 | 0.0 | 0.0014 | 0.0 | nan | 0.7132 | 0.8399 | 0.5677 | 0.6711 | 0.2585 | nan | 0.3157 | 0.0474 | 0.0 | 0.7025 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6194 | 0.0 | 0.2615 | 0.0009 | 0.0 | nan | 0.0 | 0.0227 | 0.0 | 0.0 | 0.7851 | 0.7352 | 0.8791 | 0.0 | 0.0 | 0.0014 | 0.0 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
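A hedged inference sketch for this checkpoint follows; the input image path is illustrative, the upsampling step is the usual SegFormer post-processing pattern, and loading the image processor from this repository is an assumption (fall back to `nvidia/mit-b0` if it is missing).

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

model_id = "dwang-LI/segformer-b-finetuned-segments-sidewalk-2"
processor = SegformerImageProcessor.from_pretrained(model_id)  # or "nvidia/mit-b0" if absent
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

# Illustrative input; replace with your own street-scene photo.
image = Image.open("street_scene.jpg")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]
print(segmentation_map.shape)
```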
folflo/mt5-small-finetuned-HunSum-1_hvg_index
folflo
2023-09-17T00:43:05Z
3
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-16T06:57:55Z
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_keras_callback model-index: - name: folflo/mt5-small-finetuned-HunSum-1_hvg_index results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # folflo/mt5-small-finetuned-HunSum-1_hvg_index This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8198 - Validation Loss: 2.6379 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 107952, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.1299 | 2.9515 | 0 | | 3.1908 | 2.7868 | 1 | | 3.0099 | 2.7283 | 2 | | 2.9172 | 2.6738 | 3 | | 2.8592 | 2.6477 | 4 | | 2.8198 | 2.6379 | 5 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.13.3
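No inference example is included above; a hedged TensorFlow sketch follows. The Hungarian input text is illustrative and the generation settings are arbitrary choices, not taken from the card.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "folflo/mt5-small-finetuned-HunSum-1_hvg_index"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Hungarian news text (the model was tuned on HunSum-1 articles).
article = (
    "A kormány kedden jelentette be az új intézkedéseket, amelyek a tömegközlekedés "
    "fejlesztését és a menetrendek bővítését célozzák a fővárosban."
)

inputs = tokenizer(article, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```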
CyberHarem/misaana_farrengram_kumakumakumabear
CyberHarem
2023-09-17T00:37:53Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/misaana_farrengram_kumakumakumabear", "license:mit", "region:us" ]
text-to-image
2023-09-17T00:25:19Z
--- license: mit datasets: - CyberHarem/misaana_farrengram_kumakumakumabear pipeline_tag: text-to-image tags: - art --- # Lora of misaana_farrengram_kumakumakumabear This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA. For example, if you want to use the model from step 3400, download `3400/misaana_farrengram_kumakumakumabear.pt` as the embedding and `3400/misaana_farrengram_kumakumakumabear.safetensors` as the LoRA (see the loading sketch further below). Using both files together, you can generate images of the desired character. **The best step we recommend is 3400**, with a score of 0.943. The trigger words are: 1. `misaana_farrengram_kumakumakumabear` 2. `hair_ornament, x_hair_ornament, long_hair, grey_hair, grey_eyes, bangs, hair_between_eyes, smile, open_mouth` This model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values.
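For readers working with 🤗 diffusers instead of a WebUI, here is a hedged sketch of loading one step's files. The diffusers-based workflow, prompt, and sampler settings are assumptions not taken from the card, and whether the HCP-Diffusion pt embedding loads directly via `load_textual_inversion` depends on its format.

```python
import torch
from diffusers import StableDiffusionPipeline

repo = "CyberHarem/misaana_farrengram_kumakumakumabear"
step = "3400"  # recommended step from this card

# Base model used for the preview images, per the card
# (assumes a diffusers-format repository layout and a CUDA GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# safetensors file -> LoRA weights; pt file -> textual-inversion embedding.
pipe.load_lora_weights(repo, weight_name=f"{step}/misaana_farrengram_kumakumakumabear.safetensors")
pipe.load_textual_inversion(
    repo,
    weight_name=f"{step}/misaana_farrengram_kumakumakumabear.pt",
    token="misaana_farrengram_kumakumakumabear",
)

prompt = "misaana_farrengram_kumakumakumabear, hair_ornament, long_hair, grey_hair, grey_eyes, smile"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("misaana_preview.png")
```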
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.937 | [Download](5100/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | ![pattern_6-5100](5100/previews/pattern_6.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.935 | [Download](4760/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | ![pattern_6-4760](4760/previews/pattern_6.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.913 | [Download](4420/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | ![pattern_6-4420](4420/previews/pattern_6.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.939 | [Download](4080/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | 
![pattern_5-4080](4080/previews/pattern_5.png) | ![pattern_6-4080](4080/previews/pattern_6.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.885 | [Download](3740/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | ![pattern_6-3740](3740/previews/pattern_6.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | **3400** | **0.943** | [**Download**](3400/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | ![pattern_6-3400](3400/previews/pattern_6.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.932 | [Download](3060/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | ![pattern_6-3060](3060/previews/pattern_6.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.938 | [Download](2720/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | ![pattern_6-2720](2720/previews/pattern_6.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | 
| 2380 | 0.888 | [Download](2380/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | ![pattern_6-2380](2380/previews/pattern_6.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.917 | [Download](2040/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | ![pattern_6-2040](2040/previews/pattern_6.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.897 | [Download](1700/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | ![pattern_6-1700](1700/previews/pattern_6.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.889 | [Download](1360/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | ![pattern_6-1360](1360/previews/pattern_6.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.858 | [Download](1020/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | ![pattern_6-1020](1020/previews/pattern_6.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) 
| ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.880 | [Download](680/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | ![pattern_6-680](680/previews/pattern_6.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.806 | [Download](340/misaana_farrengram_kumakumakumabear.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | ![pattern_6-340](340/previews/pattern_6.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
aapoorv75/whisper-large-v2-hindi-for-yes-no-noise-1000steps
aapoorv75
2023-09-16T23:53:42Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-16T23:53:28Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
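As with other PEFT-only repositories, the card lists only the quantization config; a hedged loading sketch follows. The base checkpoint (`openai/whisper-large-v2`), the language/task settings, and the audio file are assumptions inferred from the repository name.

```python
import torch
import soundfile as sf
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Assumption: the adapter sits on top of openai/whisper-large-v2, per the repo name.
base_id = "openai/whisper-large-v2"
adapter_id = "aapoorv75/whisper-large-v2-hindi-for-yes-no-noise-1000steps"

processor = WhisperProcessor.from_pretrained(base_id)
base_model = WhisperForConditionalGeneration.from_pretrained(
    base_id,
    load_in_8bit=True,   # matches the bitsandbytes config listed in the card
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative: transcribe a local 16 kHz mono clip.
audio, sampling_rate = sf.read("sample_hi.wav")
inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt").to(model.device)

with torch.cuda.amp.autocast():
    ids = model.generate(input_features=inputs.input_features, language="hi", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```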
Panchovix/Uni-TianYan-70B-4.65bpw-h6-exl2
Panchovix
2023-09-16T23:43:40Z
7
0
transformers
[ "transformers", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-09-16T21:11:41Z
--- license: other --- 4.65-bit quantization of Uni-TianYan (70B) (https://huggingface.co/uni-tianyan/Uni-TianYan), using exllamav2. The model is stored as sharded files to prevent massive system RAM usage when the large output files are written during quantization. It loads normally for inference. Comparable in size to a 32g quant, but with better perplexity (ppl).
UniverseTBD/astrollama
UniverseTBD
2023-09-16T23:41:20Z
57
24
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama-2", "astronomy", "astrophysics", "arxiv", "en", "dataset:universeTBD/arxiv-astro-abstracts-all", "arxiv:2308.12823", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-10T18:25:15Z
--- license: mit datasets: - universeTBD/arxiv-astro-abstracts-all language: - en metrics: - perplexity pipeline_tag: text-generation tags: - llama-2 - astronomy - astrophysics - arxiv inference: false --- <p><h1>AstroLLaMA</h1></p> **Play with the model in our Hugging Face space!** https://huggingface.co/spaces/universeTBD/astrollama <p align="center"> <img src="https://huggingface.co/universeTBD/astrollama/resolve/main/images/astrollama-logo.png" alt="AstroLLaMA" width="500px"/> </p> ## Loading the model ```python from transformers import AutoModelForCausalLM from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( pretrained_model_name_or_path="universeTBD/astrollama" ) model = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path="universeTBD/astrollama", device_map="auto", ) ``` ## Generating text from a prompt ```python import torch from transformers import pipeline generator = pipeline( task="text-generation", model=model, tokenizer=tokenizer, device_map="auto" ) # Taken from https://arxiv.org/abs/2308.12823 prompt = "In this letter, we report the discovery of the highest redshift, " \ "heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, " \ "mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. " # For reproducibility torch.manual_seed(42) generated_text = generator( prompt, do_sample=True, max_length=512 ) ``` ## Embedding text with AstroLLaMA ```python texts = [ "Abstract 1", "Abstract 2" ] inputs = tokenizer( texts, return_tensors="pt", return_token_type_ids=False, padding=True, truncation=True, max_length=4096 ) inputs.to(model.device) outputs = model(**inputs, output_hidden_states=True) # Last layer of the hidden states. Get average embedding of all tokens embeddings = outputs["hidden_states"][-1][:, 1:, ...].mean(1).detach().cpu().numpy() ```
alexsung/q-FrozenLake-v1-4x4-noSlippery
alexsung
2023-09-16T23:27:34Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-13T21:10:25Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="alexsung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
platzi/platzi-vit-model-sergio-vega
platzi
2023-09-16T23:21:26Z
196
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-16T23:17:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: platzi-vit-model-sergio-vega results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-sergio-vega This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0121 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1367 | 3.85 | 500 | 0.0121 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
thegadri/ppo-LunarLander-v2
thegadri
2023-09-16T23:10:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T23:09:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 284.69 +/- 10.99 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
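Since the usage section is still a TODO, here is a non-authoritative sketch of how an SB3 checkpoint like this is typically loaded and evaluated. The checkpoint filename is an assumption; check the repository's file list for the actual `.zip` name.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption about what this repo contains
checkpoint = load_from_hub(repo_id="thegadri/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```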
Yurio27/Clase_01
Yurio27
2023-09-16T22:59:52Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-16T19:47:06Z
--- license: creativeml-openrail-m ---
Konic/distilbert-base-uncased-finetuned-imdb
Konic
2023-09-16T22:47:04Z
125
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-16T22:43:24Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7024 | 1.0 | 157 | 2.4968 | | 2.5794 | 2.0 | 314 | 2.4281 | | 2.5354 | 3.0 | 471 | 2.4509 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
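The card above does not include inference code; a short usage sketch with the `fill-mask` pipeline (DistilBERT uses the `[MASK]` token) might look like this. The example sentence is illustrative only.

```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="Konic/distilbert-base-uncased-finetuned-imdb")

# Top predictions for the masked token in a movie-review style sentence
for prediction in mask_filler("This is a great [MASK]."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```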
Sanyam0605/ppo-LunarLander-v2
Sanyam0605
2023-09-16T22:40:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T22:40:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 243.86 +/- 18.19 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Kyro-2023/wav2vec2-large-mms-1b-zh-CN
Kyro-2023
2023-09-16T22:38:30Z
15
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_6_1", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-08T03:26:13Z
--- license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - generated_from_trainer datasets: - common_voice_6_1 model-index: - name: wav2vec2-large-mms-1b-zh-CN results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-mms-1b-zh-CN This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset. It achieves the following results on the evaluation set: - Loss: 0.9552 - Cer: 0.2071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 42.0738 | 0.04 | 100 | 2.9914 | 0.4865 | | 2.534 | 0.09 | 200 | 2.0714 | 0.3981 | | 2.0311 | 0.13 | 300 | 1.9086 | 0.3844 | | 1.9237 | 0.17 | 400 | 1.7770 | 0.3650 | | 1.865 | 0.22 | 500 | 1.6745 | 0.3579 | | 1.8275 | 0.26 | 600 | 1.6277 | 0.3414 | | 1.8094 | 0.3 | 700 | 1.6812 | 0.3639 | | 1.7503 | 0.35 | 800 | 1.6279 | 0.3427 | | 1.7448 | 0.39 | 900 | 1.5611 | 0.3376 | | 1.7459 | 0.43 | 1000 | 1.5413 | 0.3323 | | 1.7191 | 0.47 | 1100 | 1.5259 | 0.3280 | | 1.6317 | 0.52 | 1200 | 1.5102 | 0.3242 | | 1.6881 | 0.56 | 1300 | 1.4851 | 0.3212 | | 1.6401 | 0.6 | 1400 | 1.4589 | 0.3097 | | 1.5909 | 0.65 | 1500 | 1.4985 | 0.3186 | | 1.618 | 0.69 | 1600 | 1.4415 | 0.3122 | | 1.6842 | 0.73 | 1700 | 1.4596 | 0.3161 | | 1.5413 | 0.78 | 1800 | 1.4275 | 0.3003 | | 1.6461 | 0.82 | 1900 | 1.4214 | 0.3073 | | 1.5536 | 0.86 | 2000 | 1.3924 | 0.3003 | | 1.545 | 0.91 | 2100 | 1.3727 | 0.2907 | | 1.6354 | 0.95 | 2200 | 1.4157 | 0.3088 | | 1.4913 | 0.99 | 2300 | 1.4012 | 0.3042 | | 1.2739 | 1.04 | 2400 | 1.3079 | 0.2855 | | 1.2292 | 1.08 | 2500 | 1.3085 | 0.2832 | | 1.2424 | 1.12 | 2600 | 1.3273 | 0.2879 | | 1.2181 | 1.16 | 2700 | 1.3241 | 0.2864 | | 1.2101 | 1.21 | 2800 | 1.2526 | 0.2780 | | 1.26 | 1.25 | 2900 | 1.2949 | 0.2815 | | 1.2154 | 1.29 | 3000 | 1.2932 | 0.2787 | | 1.2446 | 1.34 | 3100 | 1.2774 | 0.2792 | | 1.1975 | 1.38 | 3200 | 1.2641 | 0.2751 | | 1.2048 | 1.42 | 3300 | 1.2645 | 0.2773 | | 1.1858 | 1.47 | 3400 | 1.2616 | 0.2741 | | 1.202 | 1.51 | 3500 | 1.2572 | 0.2725 | | 1.1802 | 1.55 | 3600 | 1.2554 | 0.2723 | | 1.1912 | 1.6 | 3700 | 1.2703 | 0.2657 | | 1.213 | 1.64 | 3800 | 1.2491 | 0.2743 | | 1.1949 | 1.68 | 3900 | 1.2497 | 0.2734 | | 1.1813 | 1.73 | 4000 | 1.2367 | 0.2709 | | 1.1935 | 1.77 | 4100 | 1.2174 | 0.2677 | | 1.1842 | 1.81 | 4200 | 1.2307 | 0.2660 | | 1.215 | 1.86 | 4300 | 1.2275 | 0.2696 | | 1.2102 | 1.9 | 4400 | 1.1964 | 0.2595 | | 1.2206 | 1.94 | 4500 | 1.2046 | 0.2574 | | 1.2292 | 1.98 | 4600 | 1.1900 | 0.2595 | | 1.034 | 2.03 | 4700 | 1.1849 | 0.2547 | | 0.8787 | 2.07 | 4800 | 1.1889 | 0.2558 | | 0.9124 | 2.11 | 4900 | 1.1809 | 0.2590 | | 0.9027 | 2.16 | 5000 | 1.1927 | 0.2608 | | 0.9158 | 2.2 | 5100 | 1.1860 | 0.2556 | | 0.8683 | 2.24 | 5200 | 1.1660 | 0.2522 | | 0.8932 | 2.29 | 5300 | 1.1477 | 0.2533 | | 0.9332 | 2.33 | 5400 | 1.1702 | 0.2543 | | 0.9427 | 2.37 
| 5500 | 1.1653 | 0.2523 | | 0.9085 | 2.42 | 5600 | 1.1739 | 0.2539 | | 0.9238 | 2.46 | 5700 | 1.2005 | 0.2589 | | 0.9319 | 2.5 | 5800 | 1.1877 | 0.2567 | | 0.9414 | 2.55 | 5900 | 1.1730 | 0.2505 | | 0.9428 | 2.59 | 6000 | 1.1721 | 0.2576 | | 0.942 | 2.63 | 6100 | 1.1793 | 0.2547 | | 0.9273 | 2.67 | 6200 | 1.1787 | 0.2570 | | 0.9963 | 2.72 | 6300 | 1.1570 | 0.2540 | | 0.9519 | 2.76 | 6400 | 1.1738 | 0.2563 | | 0.962 | 2.8 | 6500 | 1.1929 | 0.2628 | | 0.9765 | 2.85 | 6600 | 1.1531 | 0.2527 | | 0.9226 | 2.89 | 6700 | 1.1577 | 0.2553 | | 0.9492 | 2.93 | 6800 | 1.1490 | 0.2506 | | 0.9186 | 2.98 | 6900 | 1.1402 | 0.2500 | | 0.8681 | 3.02 | 7000 | 1.1520 | 0.2516 | | 0.7738 | 3.06 | 7100 | 1.1404 | 0.2527 | | 0.7605 | 3.11 | 7200 | 1.1535 | 0.2514 | | 0.7254 | 3.15 | 7300 | 1.1679 | 0.2490 | | 0.7422 | 3.19 | 7400 | 1.1536 | 0.2502 | | 0.823 | 3.24 | 7500 | 1.1516 | 0.2477 | | 0.7909 | 3.28 | 7600 | 1.1442 | 0.2459 | | 0.7748 | 3.32 | 7700 | 1.1522 | 0.2493 | | 0.7957 | 3.36 | 7800 | 1.1383 | 0.2470 | | 0.7383 | 3.41 | 7900 | 1.1343 | 0.2452 | | 0.8093 | 3.45 | 8000 | 1.1426 | 0.2467 | | 0.8141 | 3.49 | 8100 | 1.1357 | 0.2466 | | 0.7891 | 3.54 | 8200 | 1.1552 | 0.2480 | | 0.8246 | 3.58 | 8300 | 1.1555 | 0.2475 | | 0.7958 | 3.62 | 8400 | 1.1615 | 0.2502 | | 0.7721 | 3.67 | 8500 | 1.1041 | 0.2396 | | 0.7773 | 3.71 | 8600 | 1.1215 | 0.2411 | | 0.7847 | 3.75 | 8700 | 1.1130 | 0.2419 | | 0.7971 | 3.8 | 8800 | 1.1056 | 0.2469 | | 0.7801 | 3.84 | 8900 | 1.1129 | 0.2435 | | 0.7843 | 3.88 | 9000 | 1.1027 | 0.2387 | | 0.7842 | 3.93 | 9100 | 1.0981 | 0.2401 | | 0.7661 | 3.97 | 9200 | 1.1060 | 0.2428 | | 0.7622 | 4.01 | 9300 | 1.0790 | 0.2338 | | 0.6405 | 4.06 | 9400 | 1.0871 | 0.2352 | | 0.6102 | 4.1 | 9500 | 1.0860 | 0.2344 | | 0.6419 | 4.14 | 9600 | 1.0782 | 0.2356 | | 0.6058 | 4.18 | 9700 | 1.0739 | 0.2291 | | 0.6632 | 4.23 | 9800 | 1.1008 | 0.2366 | | 0.6373 | 4.27 | 9900 | 1.0847 | 0.2354 | | 0.6358 | 4.31 | 10000 | 1.0722 | 0.2313 | | 0.6531 | 4.36 | 10100 | 1.0796 | 0.2326 | | 0.6383 | 4.4 | 10200 | 1.0736 | 0.2322 | | 0.6537 | 4.44 | 10300 | 1.0723 | 0.2305 | | 0.6321 | 4.49 | 10400 | 1.0703 | 0.2329 | | 0.6683 | 4.53 | 10500 | 1.0769 | 0.2332 | | 0.6272 | 4.57 | 10600 | 1.0555 | 0.2292 | | 0.651 | 4.62 | 10700 | 1.0570 | 0.2323 | | 0.6392 | 4.66 | 10800 | 1.0738 | 0.2313 | | 0.665 | 4.7 | 10900 | 1.0536 | 0.2276 | | 0.677 | 4.75 | 11000 | 1.0554 | 0.2277 | | 0.6419 | 4.79 | 11100 | 1.0487 | 0.2258 | | 0.6549 | 4.83 | 11200 | 1.0427 | 0.2287 | | 0.6373 | 4.87 | 11300 | 1.0502 | 0.2291 | | 0.6642 | 4.92 | 11400 | 1.0411 | 0.2255 | | 0.6674 | 4.96 | 11500 | 1.0345 | 0.2248 | | 0.6733 | 5.0 | 11600 | 1.0440 | 0.2278 | | 0.5281 | 5.05 | 11700 | 1.0477 | 0.2253 | | 0.5465 | 5.09 | 11800 | 1.0553 | 0.2284 | | 0.5375 | 5.13 | 11900 | 1.0550 | 0.2309 | | 0.5103 | 5.18 | 12000 | 1.0433 | 0.2237 | | 0.5196 | 5.22 | 12100 | 1.0534 | 0.2301 | | 0.5645 | 5.26 | 12200 | 1.0492 | 0.2278 | | 0.5421 | 5.31 | 12300 | 1.0515 | 0.2281 | | 0.5234 | 5.35 | 12400 | 1.0383 | 0.2229 | | 0.571 | 5.39 | 12500 | 1.0569 | 0.2278 | | 0.5392 | 5.44 | 12600 | 1.0469 | 0.2253 | | 0.5867 | 5.48 | 12700 | 1.0373 | 0.2264 | | 0.5819 | 5.52 | 12800 | 1.0164 | 0.2237 | | 0.5504 | 5.57 | 12900 | 1.0183 | 0.2217 | | 0.5532 | 5.61 | 13000 | 1.0167 | 0.2232 | | 0.5575 | 5.65 | 13100 | 1.0292 | 0.2244 | | 0.5593 | 5.69 | 13200 | 1.0368 | 0.2247 | | 0.5498 | 5.74 | 13300 | 1.0215 | 0.2231 | | 0.5462 | 5.78 | 13400 | 1.0330 | 0.2212 | | 0.5751 | 5.82 | 13500 | 1.0179 | 0.2223 | | 0.5492 | 5.87 | 13600 | 1.0224 | 0.2202 | | 0.5746 | 5.91 | 
13700 | 1.0151 | 0.2219 | | 0.5288 | 5.95 | 13800 | 1.0154 | 0.2199 | | 0.5614 | 6.0 | 13900 | 1.0158 | 0.2210 | | 0.4563 | 6.04 | 14000 | 1.0120 | 0.2197 | | 0.502 | 6.08 | 14100 | 1.0125 | 0.2201 | | 0.4896 | 6.13 | 14200 | 1.0011 | 0.2160 | | 0.4774 | 6.17 | 14300 | 1.0027 | 0.2180 | | 0.4734 | 6.21 | 14400 | 1.0026 | 0.2170 | | 0.486 | 6.26 | 14500 | 0.9994 | 0.2177 | | 0.4815 | 6.3 | 14600 | 0.9977 | 0.2174 | | 0.4972 | 6.34 | 14700 | 1.0004 | 0.2175 | | 0.4832 | 6.38 | 14800 | 0.9922 | 0.2130 | | 0.4682 | 6.43 | 14900 | 0.9998 | 0.2167 | | 0.4654 | 6.47 | 15000 | 0.9886 | 0.2150 | | 0.4665 | 6.51 | 15100 | 0.9844 | 0.2154 | | 0.4696 | 6.56 | 15200 | 0.9801 | 0.2136 | | 0.4732 | 6.6 | 15300 | 0.9830 | 0.2145 | | 0.4391 | 6.64 | 15400 | 0.9886 | 0.2165 | | 0.5035 | 6.69 | 15500 | 0.9872 | 0.2157 | | 0.4721 | 6.73 | 15600 | 0.9895 | 0.2132 | | 0.466 | 6.77 | 15700 | 0.9910 | 0.2147 | | 0.4981 | 6.82 | 15800 | 0.9934 | 0.2157 | | 0.4856 | 6.86 | 15900 | 0.9888 | 0.2126 | | 0.4798 | 6.9 | 16000 | 0.9830 | 0.2150 | | 0.4771 | 6.95 | 16100 | 0.9845 | 0.2153 | | 0.473 | 6.99 | 16200 | 0.9814 | 0.2116 | | 0.4256 | 7.03 | 16300 | 0.9771 | 0.2131 | | 0.4133 | 7.08 | 16400 | 0.9803 | 0.2125 | | 0.4051 | 7.12 | 16500 | 0.9778 | 0.2116 | | 0.4274 | 7.16 | 16600 | 0.9809 | 0.2116 | | 0.4307 | 7.2 | 16700 | 0.9720 | 0.2109 | | 0.4223 | 7.25 | 16800 | 0.9730 | 0.2109 | | 0.4246 | 7.29 | 16900 | 0.9710 | 0.2100 | | 0.4478 | 7.33 | 17000 | 0.9670 | 0.2101 | | 0.4016 | 7.38 | 17100 | 0.9664 | 0.2096 | | 0.4289 | 7.42 | 17200 | 0.9667 | 0.2093 | | 0.4107 | 7.46 | 17300 | 0.9661 | 0.2096 | | 0.4643 | 7.51 | 17400 | 0.9665 | 0.2106 | | 0.433 | 7.55 | 17500 | 0.9673 | 0.2097 | | 0.4239 | 7.59 | 17600 | 0.9639 | 0.2096 | | 0.4144 | 7.64 | 17700 | 0.9635 | 0.2091 | | 0.428 | 7.68 | 17800 | 0.9604 | 0.2094 | | 0.4312 | 7.72 | 17900 | 0.9585 | 0.2099 | | 0.4164 | 7.77 | 18000 | 0.9599 | 0.2093 | | 0.4308 | 7.81 | 18100 | 0.9587 | 0.2080 | | 0.4177 | 7.85 | 18200 | 0.9575 | 0.2084 | | 0.4509 | 7.89 | 18300 | 0.9567 | 0.2082 | | 0.4244 | 7.94 | 18400 | 0.9558 | 0.2072 | | 0.4246 | 7.98 | 18500 | 0.9552 | 0.2071 | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
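No inference example is included above. A minimal sketch using the ASR pipeline is shown below; the audio file path is a placeholder, and 16 kHz mono input is assumed, as for other wav2vec2 checkpoints.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Kyro-2023/wav2vec2-large-mms-1b-zh-CN")

# "sample.wav" is a placeholder path to a Mandarin speech recording
result = asr("sample.wav")
print(result["text"])
```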
akshaysabale07169/linkedin
akshaysabale07169
2023-09-16T22:32:30Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-16T19:54:46Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of akshay tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
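The card does not show how to run the trained weights. Assuming AutoTrain produced DreamBooth LoRA weights on top of the SDXL base model (its usual output, but an assumption here), a hedged sketch with diffusers could be:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Assumes this repo contains LoRA weights compatible with load_lora_weights
pipe.load_lora_weights("akshaysabale07169/linkedin")

image = pipe(prompt="photo of akshay", num_inference_steps=30).images[0]
image.save("akshay.png")
```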
kanishka/smolm-mlm-bpe-unmask-seed_999
kanishka
2023-09-16T22:32:02Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-16T20:30:16Z
--- base_model: models/smolm-mlm/config.json tags: - generated_from_trainer metrics: - accuracy model-index: - name: smolm-mlm-bpe-unmask-seed_999 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-mlm-bpe-unmask-seed_999 This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7174 - Accuracy: 0.4453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 512 - seed: 999 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 24000 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.561 | 1.0 | 11938 | 3.5141 | 0.3483 | | 3.3506 | 2.0 | 23876 | 3.3628 | 0.3569 | | 3.1725 | 3.0 | 35814 | 3.1874 | 0.3836 | | 3.0509 | 4.0 | 47752 | 3.0693 | 0.3987 | | 2.9538 | 5.0 | 59690 | 2.9730 | 0.4077 | | 2.8792 | 6.0 | 71628 | 2.8898 | 0.4234 | | 2.8026 | 7.0 | 83566 | 2.8437 | 0.4270 | | 2.7334 | 8.0 | 95504 | 2.7835 | 0.4361 | | 2.6586 | 9.0 | 107442 | 2.7487 | 0.4434 | | 2.6413 | 10.0 | 119380 | 2.7041 | 0.4486 | ### Framework versions - Transformers 4.32.1 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
judy93536/distilroberta-pr200k-phrase-5k
judy93536
2023-09-16T22:08:05Z
572
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:judy93536/distilroberta-pr200k-ep20", "base_model:finetune:judy93536/distilroberta-pr200k-ep20", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-16T11:59:48Z
--- license: apache-2.0 base_model: judy93536/distilroberta-pr200k-ep20 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilroberta-pr200k-phrase-5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-pr200k-phrase-5k This model is a fine-tuned version of [judy93536/distilroberta-pr200k-ep20](https://huggingface.co/judy93536/distilroberta-pr200k-ep20) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1609 - Accuracy: 0.9429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.103335054745316e-06 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.28 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.5568 | 0.7518 | | No log | 2.0 | 250 | 0.2015 | 0.9319 | | No log | 3.0 | 375 | 0.1609 | 0.9429 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
sharoz/automl_Student_Marks
sharoz
2023-09-16T21:35:16Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-09-16T21:30:57Z
--- license: openrail --- - R2 Score: 0.9889785125825429 - Mean Squared Error: 4.084399193090869 Best Model: {8: {'model_id': 8, 'rank': 1, 'cost': 0.01144159542293588, 'ensemble_weight': 1.0, 'sklearn_regressor': ARDRegression(alpha_1=0.000902745247612653, alpha_2=1.460549185544636e-10, copy_X=False, lambda_1=4.435537579074724e-09, lambda_2=2.128561351928784e-08, threshold_lambda=1132.429745344044, tol=9.068123470563838e-05)}}
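To make the reported best model reproducible outside auto-sklearn, a sketch that re-instantiates the ARDRegression member with the hyperparameters listed above is shown below. The Student Marks data and any auto-sklearn preprocessing are not included, so the placeholder data here will not reproduce the reported scores.

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data standing in for the Student Marks features and targets
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 5.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

# Hyperparameters copied from the reported best ensemble member
model = ARDRegression(
    alpha_1=0.000902745247612653,
    alpha_2=1.460549185544636e-10,
    copy_X=False,
    lambda_1=4.435537579074724e-09,
    lambda_2=2.128561351928784e-08,
    threshold_lambda=1132.429745344044,
    tol=9.068123470563838e-05,
)
model.fit(X, y)

pred = model.predict(X)
print("R2:", r2_score(y, pred), "MSE:", mean_squared_error(y, pred))
```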
garage-bAInd/Platypus-13B-adapters
garage-bAInd
2023-09-16T21:18:29Z
0
0
null
[ "pytorch", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.07317", "arxiv:2307.09288", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2023-08-15T03:40:18Z
--- license: cc-by-nc-sa-4.0 language: - en datasets: - garage-bAInd/Open-Platypus --- # Platypus2-13B LoRA adapters Platypus-13B is an instruction fine-tuned model based on the LLaMA2-13B transformer architecture. ![Platty](./Best_Platty_small.jpeg) ### Benchmark Metrics | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 56.70 | | ARC (25-shot) | 61.26 | | HellaSwag (10-shot) | 82.56 | | TruthfulQA (0-shot) | 44.86 | | Avg. | 61.35 | We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results. ### Model Details * **Trained by**: Cole Hunter & Ariel Lee * **Model type:** **Platypus2-13B** is an auto-regressive language model based on the LLaMA2 transformer architecture. * **Language(s)**: English * **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) ### Prompt Template ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ### Training Dataset `garage-bAInd/Platypus2-13B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information. ### Training Procedure `garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo. ### Reproducing Evaluation Results Install LM Evaluation Harness: ``` # clone repository git clone https://github.com/EleutherAI/lm-evaluation-harness.git # check out the correct commit git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463 # change to repo directory cd lm-evaluation-harness # install pip install -e . ``` Each task was evaluated on 1 A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/truthfulqa_0shot.json --device cuda ``` ### Limitations and bias Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ ### Citations ```bibtex @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } ``` ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, } ``` ```bibtex @inproceedings{ hu2022lora, title={Lo{RA}: Low-Rank Adaptation of Large Language Models}, author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=nZeVKeeFYf9} } ```
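Because this repository contains LoRA adapters rather than merged weights, a hedged loading sketch with PEFT is given below. The base checkpoint name is an assumption; any LLaMA-2-13B weights you are licensed to use should work, and the prompt follows the template above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-13b-hf"  # assumed base model; requires access to the LLaMA-2 weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the Platypus LoRA adapters on top of the base model
model = PeftModel.from_pretrained(base_model, "garage-bAInd/Platypus-13B-adapters")

prompt = "### Instruction:\n\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```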
CyberHarem/fina_kumakumakumabear
CyberHarem
2023-09-16T21:10:43Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/fina_kumakumakumabear", "license:mit", "region:us" ]
text-to-image
2023-09-16T20:55:53Z
--- license: mit datasets: - CyberHarem/fina_kumakumakumabear pipeline_tag: text-to-image tags: - art --- # Lora of fina_kumakumakumabear This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 9300, you need to download `9300/fina_kumakumakumabear.pt` as the embedding and `9300/fina_kumakumakumabear.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 9300**, with the score of 0.991. The trigger words are: 1. `fina_kumakumakumabear` 2. `bow, hair_bow, bangs, green_eyes, short_hair, brown_hair, hair_between_eyes, pink_bow, green_hair, collarbone` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who finds the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | **9300** | **0.991** | [**Download**](9300/fina_kumakumakumabear.zip) | ![pattern_1-9300](9300/previews/pattern_1.png) | ![pattern_2-9300](9300/previews/pattern_2.png) | ![pattern_3-9300](9300/previews/pattern_3.png) | ![pattern_4-9300](9300/previews/pattern_4.png) | ![pattern_5-9300](9300/previews/pattern_5.png) | ![pattern_6-9300](9300/previews/pattern_6.png) | ![pattern_7-9300](9300/previews/pattern_7.png) | ![pattern_8-9300](9300/previews/pattern_8.png) | ![pattern_9-9300](9300/previews/pattern_9.png) | ![pattern_10-9300](9300/previews/pattern_10.png) | ![pattern_11-9300](9300/previews/pattern_11.png) | ![bikini-9300](9300/previews/bikini.png) | [<NSFW, click to see>](9300/previews/bondage.png) | ![free-9300](9300/previews/free.png) | ![maid-9300](9300/previews/maid.png) | ![miko-9300](9300/previews/miko.png) | [<NSFW, click to see>](9300/previews/nude.png) | [<NSFW, click to see>](9300/previews/nude2.png) | ![suit-9300](9300/previews/suit.png) | ![yukata-9300](9300/previews/yukata.png) | | 8680 | 0.985 | [Download](8680/fina_kumakumakumabear.zip) | ![pattern_1-8680](8680/previews/pattern_1.png) | ![pattern_2-8680](8680/previews/pattern_2.png) | ![pattern_3-8680](8680/previews/pattern_3.png) | ![pattern_4-8680](8680/previews/pattern_4.png) | ![pattern_5-8680](8680/previews/pattern_5.png) | ![pattern_6-8680](8680/previews/pattern_6.png) | ![pattern_7-8680](8680/previews/pattern_7.png) | ![pattern_8-8680](8680/previews/pattern_8.png) | ![pattern_9-8680](8680/previews/pattern_9.png) | ![pattern_10-8680](8680/previews/pattern_10.png) | ![pattern_11-8680](8680/previews/pattern_11.png) | ![bikini-8680](8680/previews/bikini.png) | [<NSFW, click to see>](8680/previews/bondage.png) | ![free-8680](8680/previews/free.png) | ![maid-8680](8680/previews/maid.png) | ![miko-8680](8680/previews/miko.png) | [<NSFW, click to see>](8680/previews/nude.png) | [<NSFW, click to see>](8680/previews/nude2.png) | ![suit-8680](8680/previews/suit.png) | ![yukata-8680](8680/previews/yukata.png) | | 8060 | 0.987 | [Download](8060/fina_kumakumakumabear.zip) | ![pattern_1-8060](8060/previews/pattern_1.png) | ![pattern_2-8060](8060/previews/pattern_2.png) | ![pattern_3-8060](8060/previews/pattern_3.png) | ![pattern_4-8060](8060/previews/pattern_4.png) | 
![pattern_5-8060](8060/previews/pattern_5.png) | ![pattern_6-8060](8060/previews/pattern_6.png) | ![pattern_7-8060](8060/previews/pattern_7.png) | ![pattern_8-8060](8060/previews/pattern_8.png) | ![pattern_9-8060](8060/previews/pattern_9.png) | ![pattern_10-8060](8060/previews/pattern_10.png) | ![pattern_11-8060](8060/previews/pattern_11.png) | ![bikini-8060](8060/previews/bikini.png) | [<NSFW, click to see>](8060/previews/bondage.png) | ![free-8060](8060/previews/free.png) | ![maid-8060](8060/previews/maid.png) | ![miko-8060](8060/previews/miko.png) | [<NSFW, click to see>](8060/previews/nude.png) | [<NSFW, click to see>](8060/previews/nude2.png) | ![suit-8060](8060/previews/suit.png) | ![yukata-8060](8060/previews/yukata.png) | | 7440 | 0.987 | [Download](7440/fina_kumakumakumabear.zip) | ![pattern_1-7440](7440/previews/pattern_1.png) | ![pattern_2-7440](7440/previews/pattern_2.png) | ![pattern_3-7440](7440/previews/pattern_3.png) | ![pattern_4-7440](7440/previews/pattern_4.png) | ![pattern_5-7440](7440/previews/pattern_5.png) | ![pattern_6-7440](7440/previews/pattern_6.png) | ![pattern_7-7440](7440/previews/pattern_7.png) | ![pattern_8-7440](7440/previews/pattern_8.png) | ![pattern_9-7440](7440/previews/pattern_9.png) | ![pattern_10-7440](7440/previews/pattern_10.png) | ![pattern_11-7440](7440/previews/pattern_11.png) | ![bikini-7440](7440/previews/bikini.png) | [<NSFW, click to see>](7440/previews/bondage.png) | ![free-7440](7440/previews/free.png) | ![maid-7440](7440/previews/maid.png) | ![miko-7440](7440/previews/miko.png) | [<NSFW, click to see>](7440/previews/nude.png) | [<NSFW, click to see>](7440/previews/nude2.png) | ![suit-7440](7440/previews/suit.png) | ![yukata-7440](7440/previews/yukata.png) | | 6820 | 0.986 | [Download](6820/fina_kumakumakumabear.zip) | ![pattern_1-6820](6820/previews/pattern_1.png) | ![pattern_2-6820](6820/previews/pattern_2.png) | ![pattern_3-6820](6820/previews/pattern_3.png) | ![pattern_4-6820](6820/previews/pattern_4.png) | ![pattern_5-6820](6820/previews/pattern_5.png) | ![pattern_6-6820](6820/previews/pattern_6.png) | ![pattern_7-6820](6820/previews/pattern_7.png) | ![pattern_8-6820](6820/previews/pattern_8.png) | ![pattern_9-6820](6820/previews/pattern_9.png) | ![pattern_10-6820](6820/previews/pattern_10.png) | ![pattern_11-6820](6820/previews/pattern_11.png) | ![bikini-6820](6820/previews/bikini.png) | [<NSFW, click to see>](6820/previews/bondage.png) | ![free-6820](6820/previews/free.png) | ![maid-6820](6820/previews/maid.png) | ![miko-6820](6820/previews/miko.png) | [<NSFW, click to see>](6820/previews/nude.png) | [<NSFW, click to see>](6820/previews/nude2.png) | ![suit-6820](6820/previews/suit.png) | ![yukata-6820](6820/previews/yukata.png) | | 6200 | 0.987 | [Download](6200/fina_kumakumakumabear.zip) | ![pattern_1-6200](6200/previews/pattern_1.png) | ![pattern_2-6200](6200/previews/pattern_2.png) | ![pattern_3-6200](6200/previews/pattern_3.png) | ![pattern_4-6200](6200/previews/pattern_4.png) | ![pattern_5-6200](6200/previews/pattern_5.png) | ![pattern_6-6200](6200/previews/pattern_6.png) | ![pattern_7-6200](6200/previews/pattern_7.png) | ![pattern_8-6200](6200/previews/pattern_8.png) | ![pattern_9-6200](6200/previews/pattern_9.png) | ![pattern_10-6200](6200/previews/pattern_10.png) | ![pattern_11-6200](6200/previews/pattern_11.png) | ![bikini-6200](6200/previews/bikini.png) | [<NSFW, click to see>](6200/previews/bondage.png) | ![free-6200](6200/previews/free.png) | ![maid-6200](6200/previews/maid.png) | ![miko-6200](6200/previews/miko.png) | 
[<NSFW, click to see>](6200/previews/nude.png) | [<NSFW, click to see>](6200/previews/nude2.png) | ![suit-6200](6200/previews/suit.png) | ![yukata-6200](6200/previews/yukata.png) | | 5580 | 0.987 | [Download](5580/fina_kumakumakumabear.zip) | ![pattern_1-5580](5580/previews/pattern_1.png) | ![pattern_2-5580](5580/previews/pattern_2.png) | ![pattern_3-5580](5580/previews/pattern_3.png) | ![pattern_4-5580](5580/previews/pattern_4.png) | ![pattern_5-5580](5580/previews/pattern_5.png) | ![pattern_6-5580](5580/previews/pattern_6.png) | ![pattern_7-5580](5580/previews/pattern_7.png) | ![pattern_8-5580](5580/previews/pattern_8.png) | ![pattern_9-5580](5580/previews/pattern_9.png) | ![pattern_10-5580](5580/previews/pattern_10.png) | ![pattern_11-5580](5580/previews/pattern_11.png) | ![bikini-5580](5580/previews/bikini.png) | [<NSFW, click to see>](5580/previews/bondage.png) | ![free-5580](5580/previews/free.png) | ![maid-5580](5580/previews/maid.png) | ![miko-5580](5580/previews/miko.png) | [<NSFW, click to see>](5580/previews/nude.png) | [<NSFW, click to see>](5580/previews/nude2.png) | ![suit-5580](5580/previews/suit.png) | ![yukata-5580](5580/previews/yukata.png) | | 4960 | 0.936 | [Download](4960/fina_kumakumakumabear.zip) | ![pattern_1-4960](4960/previews/pattern_1.png) | ![pattern_2-4960](4960/previews/pattern_2.png) | ![pattern_3-4960](4960/previews/pattern_3.png) | ![pattern_4-4960](4960/previews/pattern_4.png) | ![pattern_5-4960](4960/previews/pattern_5.png) | ![pattern_6-4960](4960/previews/pattern_6.png) | ![pattern_7-4960](4960/previews/pattern_7.png) | ![pattern_8-4960](4960/previews/pattern_8.png) | ![pattern_9-4960](4960/previews/pattern_9.png) | ![pattern_10-4960](4960/previews/pattern_10.png) | ![pattern_11-4960](4960/previews/pattern_11.png) | ![bikini-4960](4960/previews/bikini.png) | [<NSFW, click to see>](4960/previews/bondage.png) | ![free-4960](4960/previews/free.png) | ![maid-4960](4960/previews/maid.png) | ![miko-4960](4960/previews/miko.png) | [<NSFW, click to see>](4960/previews/nude.png) | [<NSFW, click to see>](4960/previews/nude2.png) | ![suit-4960](4960/previews/suit.png) | ![yukata-4960](4960/previews/yukata.png) | | 4340 | 0.942 | [Download](4340/fina_kumakumakumabear.zip) | ![pattern_1-4340](4340/previews/pattern_1.png) | ![pattern_2-4340](4340/previews/pattern_2.png) | ![pattern_3-4340](4340/previews/pattern_3.png) | ![pattern_4-4340](4340/previews/pattern_4.png) | ![pattern_5-4340](4340/previews/pattern_5.png) | ![pattern_6-4340](4340/previews/pattern_6.png) | ![pattern_7-4340](4340/previews/pattern_7.png) | ![pattern_8-4340](4340/previews/pattern_8.png) | ![pattern_9-4340](4340/previews/pattern_9.png) | ![pattern_10-4340](4340/previews/pattern_10.png) | ![pattern_11-4340](4340/previews/pattern_11.png) | ![bikini-4340](4340/previews/bikini.png) | [<NSFW, click to see>](4340/previews/bondage.png) | ![free-4340](4340/previews/free.png) | ![maid-4340](4340/previews/maid.png) | ![miko-4340](4340/previews/miko.png) | [<NSFW, click to see>](4340/previews/nude.png) | [<NSFW, click to see>](4340/previews/nude2.png) | ![suit-4340](4340/previews/suit.png) | ![yukata-4340](4340/previews/yukata.png) | | 3720 | 0.888 | [Download](3720/fina_kumakumakumabear.zip) | ![pattern_1-3720](3720/previews/pattern_1.png) | ![pattern_2-3720](3720/previews/pattern_2.png) | ![pattern_3-3720](3720/previews/pattern_3.png) | ![pattern_4-3720](3720/previews/pattern_4.png) | ![pattern_5-3720](3720/previews/pattern_5.png) | ![pattern_6-3720](3720/previews/pattern_6.png) | 
![pattern_7-3720](3720/previews/pattern_7.png) | ![pattern_8-3720](3720/previews/pattern_8.png) | ![pattern_9-3720](3720/previews/pattern_9.png) | ![pattern_10-3720](3720/previews/pattern_10.png) | ![pattern_11-3720](3720/previews/pattern_11.png) | ![bikini-3720](3720/previews/bikini.png) | [<NSFW, click to see>](3720/previews/bondage.png) | ![free-3720](3720/previews/free.png) | ![maid-3720](3720/previews/maid.png) | ![miko-3720](3720/previews/miko.png) | [<NSFW, click to see>](3720/previews/nude.png) | [<NSFW, click to see>](3720/previews/nude2.png) | ![suit-3720](3720/previews/suit.png) | ![yukata-3720](3720/previews/yukata.png) | | 3100 | 0.988 | [Download](3100/fina_kumakumakumabear.zip) | ![pattern_1-3100](3100/previews/pattern_1.png) | ![pattern_2-3100](3100/previews/pattern_2.png) | ![pattern_3-3100](3100/previews/pattern_3.png) | ![pattern_4-3100](3100/previews/pattern_4.png) | ![pattern_5-3100](3100/previews/pattern_5.png) | ![pattern_6-3100](3100/previews/pattern_6.png) | ![pattern_7-3100](3100/previews/pattern_7.png) | ![pattern_8-3100](3100/previews/pattern_8.png) | ![pattern_9-3100](3100/previews/pattern_9.png) | ![pattern_10-3100](3100/previews/pattern_10.png) | ![pattern_11-3100](3100/previews/pattern_11.png) | ![bikini-3100](3100/previews/bikini.png) | [<NSFW, click to see>](3100/previews/bondage.png) | ![free-3100](3100/previews/free.png) | ![maid-3100](3100/previews/maid.png) | ![miko-3100](3100/previews/miko.png) | [<NSFW, click to see>](3100/previews/nude.png) | [<NSFW, click to see>](3100/previews/nude2.png) | ![suit-3100](3100/previews/suit.png) | ![yukata-3100](3100/previews/yukata.png) | | 2480 | 0.886 | [Download](2480/fina_kumakumakumabear.zip) | ![pattern_1-2480](2480/previews/pattern_1.png) | ![pattern_2-2480](2480/previews/pattern_2.png) | ![pattern_3-2480](2480/previews/pattern_3.png) | ![pattern_4-2480](2480/previews/pattern_4.png) | ![pattern_5-2480](2480/previews/pattern_5.png) | ![pattern_6-2480](2480/previews/pattern_6.png) | ![pattern_7-2480](2480/previews/pattern_7.png) | ![pattern_8-2480](2480/previews/pattern_8.png) | ![pattern_9-2480](2480/previews/pattern_9.png) | ![pattern_10-2480](2480/previews/pattern_10.png) | ![pattern_11-2480](2480/previews/pattern_11.png) | ![bikini-2480](2480/previews/bikini.png) | [<NSFW, click to see>](2480/previews/bondage.png) | ![free-2480](2480/previews/free.png) | ![maid-2480](2480/previews/maid.png) | ![miko-2480](2480/previews/miko.png) | [<NSFW, click to see>](2480/previews/nude.png) | [<NSFW, click to see>](2480/previews/nude2.png) | ![suit-2480](2480/previews/suit.png) | ![yukata-2480](2480/previews/yukata.png) | | 1860 | 0.934 | [Download](1860/fina_kumakumakumabear.zip) | ![pattern_1-1860](1860/previews/pattern_1.png) | ![pattern_2-1860](1860/previews/pattern_2.png) | ![pattern_3-1860](1860/previews/pattern_3.png) | ![pattern_4-1860](1860/previews/pattern_4.png) | ![pattern_5-1860](1860/previews/pattern_5.png) | ![pattern_6-1860](1860/previews/pattern_6.png) | ![pattern_7-1860](1860/previews/pattern_7.png) | ![pattern_8-1860](1860/previews/pattern_8.png) | ![pattern_9-1860](1860/previews/pattern_9.png) | ![pattern_10-1860](1860/previews/pattern_10.png) | ![pattern_11-1860](1860/previews/pattern_11.png) | ![bikini-1860](1860/previews/bikini.png) | [<NSFW, click to see>](1860/previews/bondage.png) | ![free-1860](1860/previews/free.png) | ![maid-1860](1860/previews/maid.png) | ![miko-1860](1860/previews/miko.png) | [<NSFW, click to see>](1860/previews/nude.png) | [<NSFW, click to see>](1860/previews/nude2.png) | 
![suit-1860](1860/previews/suit.png) | ![yukata-1860](1860/previews/yukata.png) | | 1240 | 0.874 | [Download](1240/fina_kumakumakumabear.zip) | ![pattern_1-1240](1240/previews/pattern_1.png) | ![pattern_2-1240](1240/previews/pattern_2.png) | ![pattern_3-1240](1240/previews/pattern_3.png) | ![pattern_4-1240](1240/previews/pattern_4.png) | ![pattern_5-1240](1240/previews/pattern_5.png) | ![pattern_6-1240](1240/previews/pattern_6.png) | ![pattern_7-1240](1240/previews/pattern_7.png) | ![pattern_8-1240](1240/previews/pattern_8.png) | ![pattern_9-1240](1240/previews/pattern_9.png) | ![pattern_10-1240](1240/previews/pattern_10.png) | ![pattern_11-1240](1240/previews/pattern_11.png) | ![bikini-1240](1240/previews/bikini.png) | [<NSFW, click to see>](1240/previews/bondage.png) | ![free-1240](1240/previews/free.png) | ![maid-1240](1240/previews/maid.png) | ![miko-1240](1240/previews/miko.png) | [<NSFW, click to see>](1240/previews/nude.png) | [<NSFW, click to see>](1240/previews/nude2.png) | ![suit-1240](1240/previews/suit.png) | ![yukata-1240](1240/previews/yukata.png) | | 620 | 0.904 | [Download](620/fina_kumakumakumabear.zip) | ![pattern_1-620](620/previews/pattern_1.png) | ![pattern_2-620](620/previews/pattern_2.png) | ![pattern_3-620](620/previews/pattern_3.png) | ![pattern_4-620](620/previews/pattern_4.png) | ![pattern_5-620](620/previews/pattern_5.png) | ![pattern_6-620](620/previews/pattern_6.png) | ![pattern_7-620](620/previews/pattern_7.png) | ![pattern_8-620](620/previews/pattern_8.png) | ![pattern_9-620](620/previews/pattern_9.png) | ![pattern_10-620](620/previews/pattern_10.png) | ![pattern_11-620](620/previews/pattern_11.png) | ![bikini-620](620/previews/bikini.png) | [<NSFW, click to see>](620/previews/bondage.png) | ![free-620](620/previews/free.png) | ![maid-620](620/previews/maid.png) | ![miko-620](620/previews/miko.png) | [<NSFW, click to see>](620/previews/nude.png) | [<NSFW, click to see>](620/previews/nude2.png) | ![suit-620](620/previews/suit.png) | ![yukata-620](620/previews/yukata.png) |
Haidra-Org/horde-fallback-image-models
Haidra-Org
2023-09-16T21:02:34Z
0
0
null
[ "license:unknown", "region:us" ]
null
2023-06-25T20:03:45Z
--- license: unknown --- These models were once hosted elsewhere, but in the spirit of allowing them to continue to be served by AI-Horde, they have been temporarily placed here.
aiseeker/my_awesome_wiki-model
aiseeker
2023-09-16T20:54:11Z
223
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-16T20:14:14Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: my_awesome_wiki-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wiki-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9535 | 1.0 | 1581 | 1.8264 | | 1.85 | 2.0 | 3162 | 1.7420 | | 1.8039 | 3.0 | 4743 | 1.7182 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
actionpace/CalliopeDS-L2-13B
actionpace
2023-09-16T20:48:23Z
3
0
null
[ "gguf", "en", "license:other", "endpoints_compatible", "region:us" ]
null
2023-09-16T19:50:27Z
--- license: other language: - en --- **Some of my own quants:** * CalliopeDS-L2-13B_Q4_K_M.gguf * CalliopeDS-L2-13B_Q5_K_M.gguf **Source:** [Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun) **Source Model:** [CalliopeDS-L2-13B](https://huggingface.co/Doctor-Shotgun/CalliopeDS-L2-13B) **Source models for Doctor-Shotgun/CalliopeDS-L2-13B (Merge)** - [jondurbin/airoboros-l2-13b-2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2) - [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) ([Ref](https://huggingface.co/actionpace/chronos-13b-v2)) - [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b)) - [lemonilia/limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2) (Lora) - [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b) ([Ref](https://huggingface.co/actionpace/pygmalion-2-13b))
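These are GGUF files, so a sketch of running one of the listed quants locally with llama-cpp-python is shown below. The merged model's preferred prompt format is not specified here, so a plain completion call is used as an assumption.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed above from this repo
gguf_path = hf_hub_download(
    repo_id="actionpace/CalliopeDS-L2-13B",
    filename="CalliopeDS-L2-13B_Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
output = llm("Write a short scene set in a rainy harbor town.", max_tokens=256)
print(output["choices"][0]["text"])
```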
sharoz/Heart_Disease_logistic_regression
sharoz
2023-09-16T20:47:46Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-09-16T20:43:10Z
--- license: openrail --- # Model Card ## Metrics | Epoch | Train Cost | Test Cost | |-|-|-| | 0 | 1.644475895311175 | 1.5758699798535951 | | 100 | 1.3761800172714675 | 1.31997920677817 | | 200 | 1.1431781728069763 | 1.0973011517034748 | | 300 | 0.9500436967421728 | 0.9122245666448192 | | 500 | 0.6838312399592716 | 0.6560191468590589 | | 1000 | 0.44794596327099884 | 0.4297061290168145 | | 1900 | 0.3889271472988263 | 0.3803457710138814 | | Learning Rate | Bias | Initial Weights | |-|-|-| | 0.001 | 0 | 0.9 to 1.0 | ![1](https://drive.google.com/uc?export=download&id=1_Z-ctIxwOSg1XF878ZXViU2oispU4YaM)
DunnBC22/mit-b0-CMP_semantic_seg_with_mps_v2
DunnBC22
2023-09-16T20:45:01Z
206
1
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "computer-vision", "generated_from_trainer", "image-segmentation", "en", "dataset:Xpitfire/cmp_facade", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-02-20T06:22:56Z
--- language: - en license: other tags: - computer-vision - generated_from_trainer model-index: - name: mit-b0-CMP_semantic_seg_with_mps_v2 results: [] datasets: - Xpitfire/cmp_facade metrics: - mean_iou pipeline_tag: image-segmentation --- # mit-b0-CMP_semantic_seg_with_mps_v2 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0). It achieves the following results on the evaluation set: - Loss: 1.0863 - Mean Iou: 0.4097 - Mean Accuracy: 0.5538 - Overall Accuracy: 0.6951 - Per Category Iou: - Segment 0: 0.5921698801573617 - Segment 1: 0.5795623712718901 - Segment 2: 0.5784812820145221 - Segment 3: 0.2917052691882505 - Segment 4: 0.3792639848157326 - Segment 5: 0.37973303153855376 - Segment 6: 0.4481097636024487 - Segment 7: 0.4354492668218124 - Segment 8: 0.26472453634508664 - Segment 9: 0.4173722023142026 - Segment 10: 0.18166072949276144 - Segment 11: 0.36809541729585366 - Per Category Accuracy: - Segment 0: 0.6884460857323806 - Segment 1: 0.7851625477616788 - Segment 2: 0.7322992353412343 - Segment 3: 0.45229387721112274 - Segment 4: 0.5829333862769369 - Segment 5: 0.5516333441001092 - Segment 6: 0.5904157921999404 - Segment 7: 0.5288772981353482 - Segment 8: 0.4518224891972707 - Segment 9: 0.571864661897264 - Segment 10: 0.23178753217655862 - Segment 11: 0.47833833709905393 ## Model description For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Segmentation/Trained%2C%20But%20to%20My%20Standard/Center%20for%20Machine%20Perception/Version%202/Center%20for%20Machine%20Perception%20-%20semantic_segmentation_v2.ipynb ## Intended uses & limitations This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to use it, but remember that it is at your own risk/peril. 
## Training and evaluation data Dataset Source: https://huggingface.co/datasets/Xpitfire/cmp_facade ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results #### Overall Dataset Metrics | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:| | 1.6807 | 1.0 | 189 | 1.3310 | 0.2226 | 0.3388 | 0.5893 | | 1.1837 | 2.0 | 378 | 1.1731 | 0.2602 | 0.3876 | 0.6122 | | 1.0241 | 3.0 | 567 | 1.0485 | 0.2915 | 0.3954 | 0.6393 | | 0.9353 | 4.0 | 756 | 0.9943 | 0.3054 | 0.4021 | 0.6570 | | 0.8717 | 5.0 | 945 | 1.0010 | 0.3299 | 0.4440 | 0.6530 | | 0.8238 | 6.0 | 1134 | 0.9537 | 0.3546 | 0.4771 | 0.6701 | | 0.7415 | 8.0 | 1512 | 0.9738 | 0.3554 | 0.4634 | 0.6733 | | 0.7708 | 7.0 | 1323 | 0.9789 | 0.3550 | 0.4837 | 0.6683 | | 0.7018 | 9.0 | 1701 | 0.9449 | 0.3667 | 0.4802 | 0.6826 | | 0.682 | 10.0 | 1890 | 0.9422 | 0.3762 | 0.5047 | 0.6805 | | 0.6503 | 11.0 | 2079 | 0.9889 | 0.3785 | 0.5082 | 0.6729 | | 0.633 | 12.0 | 2268 | 0.9594 | 0.3901 | 0.5224 | 0.6797 | | 0.6035 | 13.0 | 2457 | 0.9612 | 0.3939 | 0.5288 | 0.6834 | | 0.5874 | 14.0 | 2646 | 0.9657 | 0.3939 | 0.5383 | 0.6844 | | 0.5684 | 15.0 | 2835 | 0.9762 | 0.3950 | 0.5446 | 0.6855 | | 0.5485 | 16.0 | 3024 | 1.0645 | 0.3794 | 0.5095 | 0.6704 | | 0.5402 | 17.0 | 3213 | 0.9747 | 0.4044 | 0.5600 | 0.6839 | | 0.5275 | 18.0 | 3402 | 1.0054 | 0.3944 | 0.5411 | 0.6790 | | 0.5032 | 19.0 | 3591 | 1.0014 | 0.3973 | 0.5256 | 0.6875 | | 0.4985 | 20.0 | 3780 | 0.9893 | 0.3990 | 0.5468 | 0.6883 | | 0.4925 | 21.0 | 3969 | 1.0416 | 0.3955 | 0.5339 | 0.6806 | | 0.4772 | 22.0 | 4158 | 1.0142 | 0.3969 | 0.5476 | 0.6838 | | 0.4707 | 23.0 | 4347 | 0.9896 | 0.4077 | 0.5458 | 0.6966 | | 0.4601 | 24.0 | 4536 | 1.0040 | 0.4104 | 0.5551 | 0.6948 | | 0.4544 | 25.0 | 4725 | 1.0093 | 0.4093 | 0.5652 | 0.6899 | | 0.4421 | 26.0 | 4914 | 1.0434 | 0.4064 | 0.5448 | 0.6938 | | 0.4293 | 27.0 | 5103 | 1.0391 | 0.4076 | 0.5571 | 0.6908 | | 0.4312 | 28.0 | 5292 | 1.0037 | 0.4100 | 0.5534 | 0.6958 | | 0.4309 | 29.0 | 5481 | 1.0288 | 0.4101 | 0.5493 | 0.6968 | | 0.4146 | 30.0 | 5670 | 1.0602 | 0.4062 | 0.5445 | 0.6928 | | 0.4106 | 31.0 | 5859 | 1.0573 | 0.4113 | 0.5520 | 0.6937 | | 0.4102 | 32.0 | 6048 | 1.0616 | 0.4043 | 0.5444 | 0.6904 | | 0.394 | 33.0 | 6237 | 1.0244 | 0.4104 | 0.5587 | 0.6957 | | 0.3865 | 34.0 | 6426 | 1.0618 | 0.4086 | 0.5468 | 0.6922 | | 0.3816 | 35.0 | 6615 | 1.0515 | 0.4109 | 0.5587 | 0.6937 | | 0.3803 | 36.0 | 6804 | 1.0709 | 0.4118 | 0.5507 | 0.6982 | | 0.3841 | 37.0 | 6993 | 1.0646 | 0.4102 | 0.5423 | 0.7000 | | 0.383 | 38.0 | 7182 | 1.0769 | 0.4076 | 0.5463 | 0.6981 | | 0.3831 | 39.0 | 7371 | 1.0821 | 0.4081 | 0.5438 | 0.6949 | | 0.3701 | 40.0 | 7560 | 1.0971 | 0.4094 | 0.5503 | 0.6939 | | 0.3728 | 41.0 | 7749 | 1.0850 | 0.4073 | 0.5426 | 0.6955 | | 0.3693 | 42.0 | 7938 | 1.0969 | 0.4065 | 0.5503 | 0.6922 | | 0.3627 | 43.0 | 8127 | 1.0932 | 0.4087 | 0.5497 | 0.6948 | | 0.3707 | 44.0 | 8316 | 1.1095 | 0.4071 | 0.5449 | 0.6950 | | 0.3715 | 45.0 | 8505 | 1.0884 | 0.4110 | 0.5481 | 0.6962 | | 0.3637 | 46.0 | 8694 | 1.0893 | 0.4116 | 0.5565 | 0.6948 | | 0.3581 | 47.0 | 8883 | 1.1164 | 0.4080 | 0.5443 | 0.6938 | | 0.3595 | 48.0 | 9072 | 1.1264 | 0.4056 | 0.5374 | 0.6942 | | 0.3604 | 
49.0 | 9261 | 1.0948 | 0.4104 | 0.5508 | 0.6953 | | 0.3541 | 50.0 | 9450 | 1.0863 | 0.4097 | 0.5538 | 0.6951 | #### Per Category IoU For Each Segment | Epoch | Segment 0 | Segment 1 | Segment 2 | Segment 3 | Segment 4 | Segment 5 | Segment 6 | Segment 7 | Segment 8 | Segment 9 | Segment 10 | Segment 11 | |:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:| | 1.0 | 0.4635 | 0.4905 | 0.4698 | 0.0 | 0.2307 | 0.1515 | 0.2789 | 0.0002 | 0.0250 | 0.3527 | 0.0 | 0.2087 | | 2.0 | 0.4240 | 0.5249 | 0.5152 | 0.0057 | 0.2636 | 0.2756 | 0.3312 | 0.0575 | 0.0539 | 0.3860 | 0.0 | 0.2854 | | 3.0 | 0.5442 | 0.5037 | 0.5329 | 0.0412 | 0.3062 | 0.2714 | 0.3820 | 0.1430 | 0.0796 | 0.4007 | 0.0002 | 0.2929 | | 4.0 | 0.5776 | 0.5289 | 0.5391 | 0.1171 | 0.3137 | 0.2600 | 0.3664 | 0.1527 | 0.1074 | 0.3935 | 0.0002 | 0.3078 | | 5.0 | 0.4790 | 0.5506 | 0.5472 | 0.1547 | 0.3372 | 0.3297 | 0.4151 | 0.2339 | 0.1709 | 0.4081 | 0.0008 | 0.3314 | | 6.0 | 0.5572 | 0.5525 | 0.5611 | 0.2076 | 0.3434 | 0.3163 | 0.4103 | 0.3279 | 0.2107 | 0.4191 | 0.0067 | 0.3418 | | 7.0 | 0.5310 | 0.5634 | 0.5594 | 0.2299 | 0.3424 | 0.3375 | 0.4050 | 0.2883 | 0.2197 | 0.4142 | 0.0316 | 0.3373 | | 8.0 | 0.5366 | 0.5659 | 0.5550 | 0.2331 | 0.3497 | 0.3334 | 0.4301 | 0.3401 | 0.1989 | 0.4181 | 0.0358 | 0.2680 | | 9.0 | 0.5798 | 0.5657 | 0.5624 | 0.2368 | 0.3648 | 0.3271 | 0.4250 | 0.3207 | 0.2096 | 0.4236 | 0.0504 | 0.3346 | | 10.0 | 0.5802 | 0.5622 | 0.5585 | 0.2340 | 0.3793 | 0.3407 | 0.4277 | 0.3801 | 0.2301 | 0.4216 | 0.0640 | 0.3367 | | 11.0 | 0.5193 | 0.5649 | 0.5605 | 0.2698 | 0.3772 | 0.3526 | 0.4342 | 0.3433 | 0.2415 | 0.4336 | 0.0889 | 0.3562 | | 12.0 | 0.5539 | 0.5641 | 0.5679 | 0.2658 | 0.3757 | 0.3510 | 0.4257 | 0.3993 | 0.2354 | 0.4338 | 0.1800 | 0.3287 | | 13.0 | 0.5663 | 0.5666 | 0.5679 | 0.2631 | 0.3726 | 0.3609 | 0.4351 | 0.3759 | 0.2511 | 0.4256 | 0.1737 | 0.3681 | | 14.0 | 0.5807 | 0.5670 | 0.5679 | 0.2670 | 0.3594 | 0.3605 | 0.4393 | 0.3863 | 0.2406 | 0.4228 | 0.1705 | 0.3652 | | 15.0 | 0.5800 | 0.5711 | 0.5671 | 0.2825 | 0.3664 | 0.3587 | 0.4408 | 0.4021 | 0.2540 | 0.4246 | 0.1376 | 0.3548 | | 16.0 | 0.4855 | 0.5683 | 0.5685 | 0.2612 | 0.3832 | 0.3628 | 0.4378 | 0.4056 | 0.2525 | 0.4206 | 0.1242 | 0.2825 | | 17.0 | 0.5697 | 0.5674 | 0.5687 | 0.2971 | 0.3767 | 0.3741 | 0.4486 | 0.4126 | 0.2489 | 0.4260 | 0.1874 | 0.3757 | | 18.0 | 0.5341 | 0.5728 | 0.5616 | 0.2827 | 0.3823 | 0.3782 | 0.4298 | 0.4070 | 0.2578 | 0.4195 | 0.1448 | 0.3632 | | 19.0 | 0.5696 | 0.5739 | 0.5699 | 0.2918 | 0.3717 | 0.3635 | 0.4444 | 0.4122 | 0.2531 | 0.4142 | 0.1659 | 0.3369 | | 20.0 | 0.5937 | 0.5702 | 0.5630 | 0.2892 | 0.3790 | 0.3757 | 0.4383 | 0.4110 | 0.2592 | 0.4147 | 0.1291 | 0.3653 | | 21.0 | 0.5336 | 0.5723 | 0.5732 | 0.2843 | 0.3748 | 0.3738 | 0.4383 | 0.3876 | 0.2598 | 0.4170 | 0.1693 | 0.3624 | | 22.0 | 0.5634 | 0.5752 | 0.5595 | 0.2783 | 0.3833 | 0.3540 | 0.4448 | 0.4054 | 0.2586 | 0.4145 | 0.1597 | 0.3660 | | 23.0 | 0.6013 | 0.5801 | 0.5794 | 0.2988 | 0.3816 | 0.3736 | 0.4464 | 0.4241 | 0.2633 | 0.4162 | 0.1747 | 0.3530 | | 24.0 | 0.6061 | 0.5756 | 0.5721 | 0.3086 | 0.3771 | 0.3707 | 0.4459 | 0.4242 | 0.2665 | 0.4104 | 0.1942 | 0.3732 | | 25.0 | 0.5826 | 0.5745 | 0.5742 | 0.3109 | 0.3765 | 0.3784 | 0.4441 | 0.4184 | 0.2609 | 0.4219 | 0.1930 | 0.3765 | | 26.0 | 0.5783 | 0.5821 | 0.5770 | 0.2985 | 0.3885 | 0.3582 | 0.4458 | 0.4220 | 0.2717 | 0.4260 | 0.1690 | 0.3600 | | 27.0 | 0.5764 | 0.5777 | 0.5749 | 0.2868 | 0.3824 | 0.3857 | 0.4450 | 0.4170 | 0.2644 | 0.4295 | 0.1922 
| - | | 28.0 | 0.6023 | 0.5776 | 0.5769 | 0.2964 | 0.3759 | 0.3758 | 0.4464 | 0.4245 | 0.2712 | 0.4083 | 0.1967 | 0.3680 | | 29.0 | 0.6043 | 0.5814 | 0.5728 | 0.2882 | 0.3867 | 0.3841 | 0.4369 | 0.4254 | 0.2659 | 0.4252 | 0.2106 | 0.3391 | | 30.0 | 0.5840 | 0.5792 | 0.5750 | 0.2859 | 0.3839 | 0.3786 | 0.4479 | 0.4259 | 0.2664 | 0.3947 | 0.1753 | 0.3780 | | 31.0 | 0.5819 | 0.5787 | 0.5775 | 0.2882 | 0.3861 | 0.3888 | 0.4522 | 0.4207 | 0.2722 | 0.4277 | 0.2050 | 0.3566 | | 32.0 | 0.5769 | 0.5774 | 0.5737 | 0.2844 | 0.3762 | 0.3768 | 0.4424 | 0.4331 | 0.2649 | 0.3959 | 0.1748 | 0.3744 | | 33.0 | 0.6076 | 0.5755 | 0.5774 | 0.2887 | 0.3833 | 0.3803 | 0.4483 | 0.4329 | 0.2687 | 0.4194 | 0.1884 | 0.3547 | | 34.0 | 0.5729 | 0.5787 | 0.5789 | 0.2853 | 0.3854 | 0.3735 | 0.4469 | 0.4279 | 0.2694 | 0.4240 | 0.1986 | 0.3613 | | 35.0 | 0.5942 | 0.5769 | 0.5777 | 0.2873 | 0.3867 | 0.3811 | 0.4448 | 0.4281 | 0.2669 | 0.4147 | 0.1956 | 0.3774 | | 36.0 | 0.6024 | 0.5819 | 0.5782 | 0.2870 | 0.3850 | 0.3781 | 0.4469 | 0.4259 | 0.2696 | 0.4177 | 0.1885 | 0.3802 | | 37.0 | 0.6099 | 0.5822 | 0.5787 | 0.2920 | 0.3827 | 0.3739 | 0.4416 | 0.4271 | 0.2646 | 0.4200 | 0.1864 | 0.3637 | | 38.0 | 0.6028 | 0.5823 | 0.5799 | 0.2887 | 0.3828 | 0.3770 | 0.4470 | 0.4238 | 0.2639 | 0.4197 | 0.1617 | 0.3610 | | 39.0 | 0.5856 | 0.5809 | 0.5772 | 0.2889 | 0.3772 | 0.3683 | 0.4493 | 0.4296 | 0.2665 | 0.4112 | 0.1902 | 0.3723 | | 40.0 | 0.5830 | 0.5808 | 0.5785 | 0.2947 | 0.3803 | 0.3832 | 0.4496 | 0.4284 | 0.2675 | 0.4111 | 0.1913 | 0.3644 | | 41.0 | 0.5853 | 0.5827 | 0.5786 | 0.2921 | 0.3809 | 0.3712 | 0.4464 | 0.4330 | 0.2670 | 0.4180 | 0.1631 | 0.3694 | | 42.0 | 0.5756 | 0.5804 | 0.5766 | 0.2872 | 0.3775 | 0.3786 | 0.4480 | 0.4396 | 0.2669 | 0.4132 | 0.1619 | 0.3729 | | 43.0 | 0.5872 | 0.5821 | 0.5762 | 0.2896 | 0.3820 | 0.3742 | 0.4499 | 0.4346 | 0.2685 | 0.4164 | 0.1848 | 0.3597 | | 44.0 | 0.5894 | 0.5823 | 0.5774 | 0.2917 | 0.3801 | 0.3754 | 0.4476 | 0.4287 | 0.2635 | 0.4096 | 0.1911 | 0.3478 | | 45.0 | 0.5912 | 0.5809 | 0.5791 | 0.2980 | 0.3817 | 0.3750 | 0.4483 | 0.4349 | 0.2677 | 0.4155 | 0.1909 | 0.3686 | | 46.0 | 0.5922 | 0.5794 | 0.5788 | 0.2952 | 0.3804 | 0.3754 | 0.4487 | 0.4356 | 0.2641 | 0.4159 | 0.2068 | 0.3666 | | 47.0 | 0.5748 | 0.5822 | 0.5779 | 0.2909 | 0.3849 | 0.3751 | 0.4487 | 0.4350 | 0.2687 | 0.4150 | 0.1785 | 0.3643 | | 48.0 | 0.5787 | 0.5823 | 0.5789 | 0.2896 | 0.3819 | 0.3750 | 0.4479 | 0.4224 | 0.2665 | 0.4140 | 0.1723 | 0.3580 | | 49.0 | 0.5878 | 0.5812 | 0.5782 | 0.2930 | 0.3807 | 0.3796 | 0.4482 | 0.4364 | 0.2659 | 0.4139 | 0.1915 | 0.3678 | | 50.0 | 0.5922 | 0.5796 | 0.5785 | 0.2917 | 0.3793 | 0.3797 | 0.4481 | 0.4354 | 0.2647 | 0.4174 | 0.1817 | 0.3681 | #### Per Category Accuracy For Each Segment | Epoch | Segment 0 | Segment 1 | Segment 2 | Segment 3 | Segment 4 | Segment 5 | Segment 6 | Segment 7 | Segment 8 | Segment 9 | Segment 10 | Segment 11 | |:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:| | 1.0 | 0.6133 | 0.6847 | 0.7408 | 0.0 | 0.4973 | 0.1720 | 0.4073 | 0.0002 | 0.0255 | 0.6371 | 0.0 | 0.2874 | | 2.0 | 0.4782 | 0.7844 | 0.6966 | 0.0057 | 0.5735 | 0.3684 | 0.6226 | 0.0577 | 0.0563 | 0.5907 | 0.0 | 0.4168 | | 3.0 | 0.8126 | 0.6852 | 0.6683 | 0.0420 | 0.4972 | 0.3418 | 0.5121 | 0.1453 | 0.0849 | 0.5882 | 0.0002 | 0.3672 | | 4.0 | 0.8079 | 0.7362 | 0.6803 | 0.1231 | 0.5129 | 0.3324 | 0.4212 | 0.1554 | 0.1223 | 0.5587 | 0.0002 | 0.3751 | | 5.0 | 0.5408 | 0.8111 | 0.7439 | 0.1647 | 0.5336 | 0.4720 | 0.5650 | 0.2459 | 
0.2127 | 0.6032 | 0.0008 | 0.4343 | | 6.0 | 0.6870 | 0.7532 | 0.7389 | 0.2428 | 0.5081 | 0.4173 | 0.5923 | 0.3710 | 0.3117 | 0.6181 | 0.0068 | 0.4785 | | 7.0 | 0.6050 | 0.7961 | 0.7434 | 0.2876 | 0.5835 | 0.4949 | 0.5608 | 0.3103 | 0.3672 | 0.6185 | 0.0345 | 0.4022 | | 8.0 | 0.6081 | 0.8461 | 0.6598 | 0.3035 | 0.5720 | 0.4540 | 0.5735 | 0.3849 | 0.2642 | 0.5608 | 0.0379 | 0.2962 | | 9.0 | 0.7241 | 0.7684 | 0.7677 | 0.2958 | 0.5321 | 0.4212 | 0.5547 | 0.3513 | 0.2813 | 0.5645 | 0.0544 | 0.4465 | | 10.0 | 0.7124 | 0.7649 | 0.7024 | 0.2879 | 0.5535 | 0.4413 | 0.6310 | 0.4960 | 0.3982 | 0.5592 | 0.0724 | 0.4370 | | 11.0 | 0.5876 | 0.8060 | 0.7296 | 0.3838 | 0.5267 | 0.4983 | 0.5902 | 0.3838 | 0.4151 | 0.5987 | 0.1030 | 0.4756 | | 12.0 | 0.6497 | 0.7807 | 0.7448 | 0.4018 | 0.5381 | 0.4615 | 0.5849 | 0.4883 | 0.3248 | 0.6063 | 0.2918 | 0.3958 | | 13.0 | 0.6650 | 0.7792 | 0.7595 | 0.4049 | 0.5501 | 0.4940 | 0.5831 | 0.4375 | 0.3843 | 0.5591 | 0.2578 | 0.4711 | | 14.0 | 0.6881 | 0.7715 | 0.7076 | 0.4518 | 0.6011 | 0.4900 | 0.6235 | 0.4466 | 0.3627 | 0.5934 | 0.2537 | 0.4702 | | 15.0 | 0.6690 | 0.7721 | 0.7253 | 0.4607 | 0.6286 | 0.4900 | 0.5936 | 0.4951 | 0.4337 | 0.6295 | 0.1749 | 0.4630 | | 16.0 | 0.5250 | 0.8335 | 0.7460 | 0.3742 | 0.6114 | 0.4823 | 0.5880 | 0.5021 | 0.4084 | 0.5757 | 0.1498 | 0.3171 | | 17.0 | 0.6652 | 0.7673 | 0.7058 | 0.4318 | 0.5995 | 0.5137 | 0.6112 | 0.5596 | 0.4548 | 0.5819 | 0.2821 | 0.5465 | | 18.0 | 0.6012 | 0.8091 | 0.6765 | 0.4561 | 0.5707 | 0.5393 | 0.6255 | 0.5679 | 0.4347 | 0.5567 | 0.1806 | 0.4751 | | 19.0 | 0.6634 | 0.8079 | 0.6986 | 0.4389 | 0.5274 | 0.4876 | 0.6232 | 0.5022 | 0.3717 | 0.5244 | 0.2232 | 0.4388 | | 20.0 | 0.7110 | 0.7679 | 0.6952 | 0.4875 | 0.5261 | 0.5549 | 0.6444 | 0.5301 | 0.4512 | 0.5441 | 0.1603 | 0.4888 | | 21.0 | 0.5945 | 0.8130 | 0.7299 | 0.4511 | 0.5922 | 0.5324 | 0.5643 | 0.4341 | 0.4067 | 0.5834 | 0.2272 | 0.4781 | | 22.0 | 0.6478 | 0.7921 | 0.6887 | 0.4826 | 0.5784 | 0.4599 | 0.6029 | 0.5938 | 0.4905 | 0.5605 | 0.2094 | 0.4644 | | 23.0 | 0.7110 | 0.7878 | 0.7192 | 0.4629 | 0.5670 | 0.5061 | 0.5891 | 0.5354 | 0.4442 | 0.5585 | 0.2280 | 0.4401 | | 24.0 | 0.7277 | 0.7718 | 0.7095 | 0.4789 | 0.5401 | 0.5080 | 0.6040 | 0.5314 | 0.4573 | 0.5414 | 0.2853 | 0.5062 | | 25.0 | 0.6781 | 0.7703 | 0.7305 | 0.5102 | 0.5954 | 0.5311 | 0.5960 | 0.5286 | 0.4647 | 0.5861 | 0.2676 | 0.5242 | | 26.0 | 0.6603 | 0.7989 | 0.7349 | 0.4689 | 0.5677 | 0.4620 | 0.6111 | 0.5258 | 0.4556 | 0.5889 | 0.2110 | 0.4530 | | 27.0 | - | - | - | - | - | - | - | - | - | - | - | - | | 28.0 | 0.7218 | 0.7735 | 0.7273 | 0.4297 | 0.6001 | 0.5321 | - | - | - | - | - | - | | 29.0 | 0.7054 | 0.7948 | 0.7009 | 0.4552 | 0.5413 | 0.5357 | 0.5421 | 0.5250 | 0.4701 | 0.5949 | 0.3048 | 0.4213 | | 30.0 | 0.6744 | 0.8004 | 0.7289 | 0.4421 | 0.5410 | 0.5409 | 0.5822 | 0.5334 | 0.4790 | 0.5028 | 0.2177 | 0.4910 | | 31.0 | 0.6622 | 0.7858 | 0.7534 | 0.3855 | 0.5707 | 0.5889 | 0.5902 | 0.4979 | 0.4268 | 0.6260 | 0.2735 | 0.4630 | | 32.0 | 0.6629 | 0.7960 | 0.7345 | 0.4132 | 0.5703 | 0.5450 | 0.5855 | 0.5469 | 0.4371 | 0.5087 | 0.2178 | 0.5147 | | 33.0 | 0.7279 | 0.7642 | 0.7250 | 0.4999 | 0.5330 | 0.5418 | 0.6148 | 0.5491 | 0.4678 | 0.5808 | 0.2548 | 0.4455 | | 34.0 | 0.6571 | 0.8002 | 0.7190 | 0.4516 | 0.5621 | 0.5183 | 0.5822 | 0.5444 | 0.3994 | 0.5931 | 0.2752 | 0.4588 | | 35.0 | 0.6946 | 0.7771 | 0.7289 | 0.4481 | 0.5478 | 0.5396 | 0.5834 | 0.5407 | 0.4980 | 0.5652 | 0.2696 | 0.5116 | | 36.0 | 0.7040 | 0.7881 | 0.7314 | 0.4432 | 0.5429 | 0.5308 | 0.5705 | 0.5124 | 0.4619 | 0.5667 | 
0.2465 | 0.5101 | | 37.0 | 0.7277 | 0.7884 | 0.7298 | 0.4325 | 0.5471 | 0.5196 | 0.5523 | 0.5073 | 0.4390 | 0.5614 | 0.2453 | 0.4575 | | 38.0 | 0.7092 | 0.7907 | 0.7297 | 0.4713 | 0.5626 | 0.5483 | 0.5667 | 0.5067 | 0.4552 | 0.5608 | 0.2002 | 0.4545 | | 39.0 | 0.6763 | 0.8000 | 0.7345 | 0.4678 | 0.5544 | 0.5005 | 0.5818 | 0.5236 | 0.4071 | 0.5436 | 0.2496 | 0.4865 | | 40.0 | 0.6681 | 0.8020 | 0.7232 | 0.4519 | 0.5724 | 0.5465 | 0.5828 | 0.5132 | 0.4686 | 0.5479 | 0.2589 | 0.4678 | | 41.0 | 0.6698 | 0.8022 | 0.7318 | 0.4297 | 0.5493 | 0.5160 | 0.5727 | 0.5289 | 0.4574 | 0.5711 | 0.1978 | 0.4842 | | 42.0 | 0.6542 | 0.7977 | 0.7309 | 0.4450 | 0.5653 | 0.5389 | 0.5874 | 0.5625 | 0.4662 | 0.5561 | 0.1969 | 0.5024 | | 43.0 | 0.6732 | 0.7995 | 0.7126 | 0.4343 | 0.5636 | 0.5217 | 0.5952 | 0.5608 | 0.4679 | 0.5672 | 0.2449 | 0.4559 | | 44.0 | 0.6797 | 0.8035 | 0.7234 | 0.4571 | 0.5651 | 0.5352 | 0.5728 | 0.5156 | 0.4591 | 0.5458 | 0.2506 | 0.4307 | | 45.0 | 0.6866 | 0.7923 | 0.7332 | 0.4349 | 0.5523 | 0.5312 | 0.5855 | 0.5314 | 0.4323 | 0.5653 | 0.2488 | 0.4833 | | 46.0 | 0.6868 | 0.7856 | 0.7297 | 0.4426 | 0.5763 | 0.5288 | 0.5846 | 0.5331 | 0.4573 | 0.5724 | 0.2999 | 0.4811 | | 47.0 | 0.6506 | 0.8100 | 0.7248 | 0.4534 | 0.5506 | 0.5230 | 0.5954 | 0.5515 | 0.4251 | 0.5546 | 0.2245 | 0.4677 | | 48.0 | 0.6590 | 0.8106 | 0.7334 | 0.4353 | 0.5542 | 0.5254 | 0.5813 | 0.4869 | 0.4373 | 0.5611 | 0.2135 | 0.4503 | | 49.0 | 0.6790 | 0.7967 | 0.7227 | 0.4477 | 0.5612 | 0.5523 | 0.5861 | 0.5460 | 0.4310 | 0.5518 | 0.2535 | 0.4817 | | 50.0 | 0.6884 | 0.7852 | 0.7323 | 0.4523 | 0.5829 | 0.5516 | 0.5904 | 0.5289 | 0.4518 | 0.5719 | 0.2318 | 0.4783 | * All values in the above charts are rounded to nearest ten-thousandth. ### Framework versions - Transformers 4.26.1 - Pytorch 1.12.1 - Datasets 2.9.0 - Tokenizers 0.12.1
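The card above reports training metrics only and does not restate the repository id or show inference code. As a loosely hedged sketch, a `transformers` semantic-segmentation checkpoint of this kind can usually be queried as below; the repository id is a placeholder because it is not given in this excerpt, and the input file name is illustrative.

```python
# Hedged sketch: generic inference for a transformers semantic-segmentation
# checkpoint. Replace the placeholder repository id with the actual model name.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation

repo = "<this-repository-id>"  # placeholder, not stated in this excerpt
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForSemanticSegmentation.from_pretrained(repo)

image = Image.open("scene.jpg")  # illustrative input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height, width)

# Per-pixel class ids; the tables above track 12 segment classes.
segmentation = logits.argmax(dim=1)[0]
print(segmentation.shape, segmentation.unique())
```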
JenniferD16/proyecto
JenniferD16
2023-09-16T20:41:28Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-16T18:28:02Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: JenniferD16/proyecto results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # JenniferD16/proyecto This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.6174 - Validation Loss: 5.6332 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.8977 | 5.7571 | 0 | | 5.6895 | 5.6332 | 1 | | 5.6174 | 5.6332 | 2 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.13.3
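The card does not include an inference example. A minimal sketch is given below, assuming the checkpoint works with the standard `transformers` question-answering pipeline; `framework="tf"` is used because the repository ships TensorFlow weights, and the question/context strings are only illustrative.

```python
# Minimal usage sketch (not from the original card).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="JenniferD16/proyecto",
    framework="tf",  # the repository contains TensorFlow weights
)

result = qa(
    question="What architecture is the model based on?",  # illustrative
    context="The model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```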
trieudemo11/llama_7b_attrb_cate_4m_240_1
trieudemo11
2023-09-16T20:35:52Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-16T20:35:37Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
rkla/minetester-treechop_shaped-v0-ppo_fixed_dtime_seed1-seed1
rkla
2023-09-16T20:29:07Z
0
0
minetest-baselines
[ "minetest-baselines", "tensorboard", "minetester-treechop_shaped-v0", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T20:28:54Z
--- tags: - minetester-treechop_shaped-v0 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: minetest-baselines model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: minetester-treechop_shaped-v0 type: minetester-treechop_shaped-v0 metrics: - type: mean_reward value: -2.21 +/- 0.00 name: mean_reward verified: false --- # **PPO** Agent Playing **minetester-treechop_shaped-v0** This is a trained model of a PPO agent playing minetester-treechop_shaped-v0. The model was trained using [minetest-baselines](https://github.com/EleutherAI/minetest-baselines). ## Command to reproduce the training ```bash python -m minetest_baselines.train --algo ppo --exp-name ppo_fixed_dtime_seed1 --seed 1 --capture-video --track --wandb-entity rkla --save-model --upload-model --hf-entity rkla --total-timesteps 1000 --num-envs 2 ``` # Hyperparameters ```python {'anneal_lr': True, 'batch_size': 256, 'capture_video': True, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'minetester-treechop_shaped-v0', 'exp_name': 'ppo_fixed_dtime_seed1', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'rkla', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 64, 'norm_adv': True, 'num_envs': 2, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 3, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 1000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'video_frequency': 100, 'wandb_entity': 'rkla', 'wandb_project_name': 'minetest-baselines'} ```
hardikpanchariya/sks-hardik
hardikpanchariya
2023-09-16T20:19:28Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-16T18:33:08Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of a sks hardik tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
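The card gives only the base model and the instance prompt. The sketch below is a hedged example of how such a checkpoint is typically used with `diffusers`, assuming the repository holds SDXL LoRA weights (the usual AutoTrain DreamBooth output); the prompt comes from the card's `instance_prompt` metadata.

```python
# Hedged sketch: assumes the repository contains DreamBooth LoRA weights for SDXL.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("hardikpanchariya/sks-hardik")

# Instance prompt taken from the card metadata.
image = pipe("photo of a sks hardik", num_inference_steps=25).images[0]
image.save("sks_hardik.png")
```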
dminhk/stable-diffusion-xl-base-1.0-neuronx
dminhk
2023-09-16T20:10:30Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2023-09-16T17:48:57Z
--- license: openrail++ --- **Amazon EC2 Inf2 Instances** - Instance Size: inf2.xlarge - https://aws.amazon.com/ec2/instance-types/inf2/ **How to use the Diffusers library for inference** ```python from optimum.neuron import NeuronStableDiffusionXLPipeline prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained( model_id="dminhk/stable-diffusion-xl-base-1.0-neuronx", device_ids=[0, 1] ) image = stable_diffusion_xl(prompt).images[0] ```
manudeva/rare-puppers
manudeva
2023-09-16T20:04:43Z
194
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-16T20:04:33Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### person leaning back in chair ![person leaning back in chair](images/person_leaning_back_in_chair.jpg) #### person sitting leaning left ![person sitting leaning left](images/person_sitting_leaning_left.jpg) #### person sitting leaning right ![person sitting leaning right ](images/person_sitting_leaning_right_.jpg) #### person sitting slouched ![person sitting slouched](images/person_sitting_slouched.jpg) #### person sitting upright facing forward ![person sitting upright facing forward](images/person_sitting_upright_facing_forward.jpg)
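A short, hedged usage sketch (not part of the HuggingPics card): the checkpoint can be queried with the `transformers` image-classification pipeline; the image path is illustrative.

```python
# Minimal sketch: classify a posture photo with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="manudeva/rare-puppers")
predictions = classifier("sitting_pose.jpg")  # path or URL to your own image
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```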
hakanssonjesper/falcon-rw-1b-ggml
hakanssonjesper
2023-09-16T20:00:55Z
4
0
transformers
[ "transformers", "llm-rs", "ggml", "text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-generation
2023-09-16T20:00:53Z
--- tags: - llm-rs - ggml pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anyuanay/my_awesome_billsum_model
anyuanay
2023-09-16T20:00:44Z
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:billsum", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-16T19:55:05Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: my_awesome_billsum_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1369 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5326 - Rouge1: 0.1369 - Rouge2: 0.0453 - Rougel: 0.1124 - Rougelsum: 0.1122 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8265 | 0.1253 | 0.0374 | 0.1075 | 0.1071 | 19.0 | | No log | 2.0 | 124 | 2.6118 | 0.1382 | 0.0469 | 0.1156 | 0.1154 | 19.0 | | No log | 3.0 | 186 | 2.5500 | 0.1373 | 0.0448 | 0.1129 | 0.1127 | 19.0 | | No log | 4.0 | 248 | 2.5326 | 0.1369 | 0.0453 | 0.1124 | 0.1122 | 19.0 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
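The card lists only training metrics. Below is a hedged inference sketch: T5 summarization fine-tunes of this kind are usually called through the summarization pipeline with a `"summarize: "` prefix on the input, but that prefix and the sample text are assumptions rather than details confirmed by the card.

```python
# Hedged usage sketch for the fine-tuned T5 summarizer.
from transformers import pipeline

summarizer = pipeline("summarization", model="anyuanay/my_awesome_billsum_model")

# "summarize: " prefix assumed, matching common t5-small fine-tuning setups.
bill_text = "summarize: The bill amends the state education code to fund after-school programs ..."
print(summarizer(bill_text, max_length=40, min_length=10)[0]["summary_text"])
```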
Shishir1807/M6_llama
Shishir1807
2023-09-16T19:59:34Z
10
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-16T19:56:03Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.29.2 pip install einops==0.6.1 pip install accelerate==0.19.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="Shishir1807/M6_llama", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Shishir1807/M6_llama", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "Shishir1807/M6_llama", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Shishir1807/M6_llama" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
manudeva/intervUone
manudeva
2023-09-16T19:55:27Z
214
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-16T19:55:12Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: intervUone results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.17499999701976776 --- # intervUone Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### person sitting in chair leaning back ![person sitting in chair leaning back](images/person_sitting_in_chair_leaning_back.jpg) #### person sitting leaning left in chair ![person sitting leaning left in chair](images/person_sitting_leaning_left_in_chair.jpg) #### person sitting leaning right in chair ![person sitting leaning right in chair](images/person_sitting_leaning_right_in_chair.jpg) #### person sitting upright in chair ![person sitting upright in chair](images/person_sitting_upright_in_chair.jpg) #### person slouching in chair ![person slouching in chair](images/person_slouching_in_chair.jpg)
Shishir1807/M4_llama
Shishir1807
2023-09-16T19:55:11Z
163
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-16T19:54:34Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.29.2 pip install einops==0.6.1 pip install accelerate==0.19.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="Shishir1807/M4_llama", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Shishir1807/M4_llama", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "Shishir1807/M4_llama", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Shishir1807/M4_llama" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` GPTNeoXForCausalLM( (gpt_neox): GPTNeoXModel( (embed_in): Embedding(50304, 2560) (layers): ModuleList( (0-31): 32 x GPTNeoXLayer( (input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (attention): GPTNeoXAttention( (rotary_emb): RotaryEmbedding() (query_key_value): Linear(in_features=2560, out_features=7680, bias=True) (dense): Linear(in_features=2560, out_features=2560, bias=True) ) (mlp): GPTNeoXMLP( (dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True) (dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True) (act): GELUActivation() ) ) ) (final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) ) (embed_out): Linear(in_features=2560, out_features=50304, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Shishir1807/M1_llama
Shishir1807
2023-09-16T19:53:56Z
9
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-16T19:52:28Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.29.2 pip install einops==0.6.1 pip install accelerate==0.19.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="Shishir1807/M1_llama", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Shishir1807/M1_llama", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "Shishir1807/M1_llama", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Shishir1807/M1_llama" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
TamalDeFrijol/IAFrijol
TamalDeFrijol
2023-09-16T19:46:38Z
184
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-13T22:08:13Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: IAFrijol results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9924812030075187 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IAFrijol This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0265 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.142 | 3.85 | 500 | 0.0265 | 0.9925 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
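As a hedged sketch not present in the original card, the checkpoint can be loaded with the plain model/processor API to classify a bean-leaf photo; the image file name is illustrative.

```python
# Illustrative sketch: single-image inference with the fine-tuned ViT checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "TamalDeFrijol/IAFrijol"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("leaf.jpg")  # your own bean-leaf image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```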
thegoodfellas/tgf-xlm-roberta-base-pt-br
thegoodfellas
2023-09-16T19:35:21Z
85
2
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "fill-mask", "dataset:thegoodfellas/brwac_tiny", "arxiv:1907.11692", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-09T22:53:41Z
--- license: mit datasets: thegoodfellas/brwac_tiny widget: - text: Demanda por fundos de <mask> para crianças cresce em 2022 example_title: Exemplo 1 - text: Havia uma <mask> no meio do caminho example_title: Exemplo 2 - text: Na verdade, começar a <mask> cedo é ideal para ter um bom dinheiro no futuro example_title: Exemplo 3 - text: Mitos e verdades sobre o <mask>. Doença que mais mata mulheres no Brasil. example_title: Exemplo 4 base_model: xlm-roberta-base model-index: - name: tgf-xlm-roberta-base-pt-br results: [] --- # tgf-xlm-roberta-base-pt-br This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [BrWac](https://huggingface.co/datasets/thegoodfellas/brwac_tiny) dataset. ## Model description This is a version of XLM-RoBERTa fine-tuned for Brazilian Portuguese. It was trained using the [BrWac](https://huggingface.co/datasets/thegoodfellas/brwac_tiny) dataset and followed the principles from the [RoBERTa paper](https://arxiv.org/abs/1907.11692). The key strategies are: 1. *Full-Sentences*: Quoted from the paper: "Each input is packed with full sentences sampled contiguously from one or more documents, such that the total length is at most 512 tokens. Inputs may cross document boundaries. When we reach the end of one document, we begin sampling sentences from the next document and add an extra separator token between documents". 2. Tuned hyperparameters: adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-6 (as the paper suggests) ## Availability The source code is available [here](https://github.com/the-good-fellas/xlm-roberta-pt-br). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-4 - train_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.23.1 - Pytorch 1.11.0a0+b6df043 - Datasets 2.6.1 - Tokenizers 0.13.1 ### Environment 4xA100.88V NVIDIA Special thanks to [DataCrunch.io](https://datacrunch.io) for their amazing and affordable GPUs. <img src="https://datacrunch.io/_next/static/media/Logo.6b773500.svg" width="20%"/>
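A minimal usage sketch (not in the original card), reusing one of the widget examples listed in the card metadata:

```python
# Fill-mask inference with one of the card's own widget examples.
from transformers import pipeline

fill = pipeline("fill-mask", model="thegoodfellas/tgf-xlm-roberta-base-pt-br")
for candidate in fill("Havia uma <mask> no meio do caminho", top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```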
mohsinshah/git-base-coco-dummy-temp100
mohsinshah
2023-09-16T19:34:15Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "git", "image-text-to-text", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-09-16T16:33:01Z
--- license: mit tags: - generated_from_trainer model-index: - name: git-base-coco-dummy-temp100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-coco-dummy-temp100 This model is a fine-tuned version of [microsoft/git-base-coco](https://huggingface.co/microsoft/git-base-coco) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3244 - Wer Score: 2.5274 - Blue Score: 0.1539 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | Blue Score | |:-------------:|:-----:|:----:|:---------------:|:---------:|:----------:| | 8.003 | 1.01 | 35 | 6.0344 | 1.2184 | 0.0042 | | 4.5865 | 2.03 | 70 | 2.7898 | 2.0108 | 0.0150 | | 1.6831 | 3.04 | 105 | 0.7239 | 1.8867 | 0.0250 | | 0.5535 | 4.06 | 140 | 0.4567 | 1.9632 | 0.0377 | | 0.3898 | 5.07 | 175 | 0.4016 | 2.2010 | 0.0618 | | 0.3024 | 6.09 | 210 | 0.3712 | 1.7956 | 0.0892 | | 0.2411 | 7.1 | 245 | 0.3506 | 2.5006 | 0.0950 | | 0.2034 | 8.12 | 280 | 0.3388 | 2.1609 | 0.1094 | | 0.1721 | 9.13 | 315 | 0.3319 | 2.2919 | 0.1092 | | 0.1449 | 10.14 | 350 | 0.3236 | 2.1683 | 0.1184 | | 0.1245 | 11.16 | 385 | 0.3205 | 2.3594 | 0.1242 | | 0.1073 | 12.17 | 420 | 0.3160 | 2.4311 | 0.1343 | | 0.0956 | 13.19 | 455 | 0.3141 | 2.3027 | 0.1327 | | 0.0821 | 14.2 | 490 | 0.3121 | 2.3957 | 0.1369 | | 0.07 | 15.22 | 525 | 0.3131 | 2.2508 | 0.1407 | | 0.0678 | 16.23 | 560 | 0.3114 | 2.4291 | 0.1390 | | 0.0563 | 17.25 | 595 | 0.3113 | 2.5218 | 0.1428 | | 0.0518 | 18.26 | 630 | 0.3108 | 2.2964 | 0.1513 | | 0.0474 | 19.28 | 665 | 0.3138 | 2.2457 | 0.1492 | | 0.0407 | 20.29 | 700 | 0.3136 | 2.3072 | 0.1485 | | 0.0383 | 21.3 | 735 | 0.3138 | 2.4791 | 0.1426 | | 0.0373 | 22.32 | 770 | 0.3136 | 2.4541 | 0.1472 | | 0.03 | 23.33 | 805 | 0.3145 | 2.4218 | 0.1500 | | 0.0316 | 24.35 | 840 | 0.3141 | 2.4169 | 0.1466 | | 0.0255 | 25.36 | 875 | 0.3149 | 2.5450 | 0.1473 | | 0.0266 | 26.38 | 910 | 0.3159 | 2.4613 | 0.1475 | | 0.0245 | 27.39 | 945 | 0.3161 | 2.4809 | 0.1506 | | 0.0223 | 28.41 | 980 | 0.3172 | 2.4252 | 0.1516 | | 0.0189 | 29.42 | 1015 | 0.3173 | 2.6111 | 0.1501 | | 0.0204 | 30.43 | 1050 | 0.3184 | 2.5457 | 0.1518 | | 0.0194 | 31.45 | 1085 | 0.3191 | 2.6389 | 0.1493 | | 0.0154 | 32.46 | 1120 | 0.3188 | 2.5125 | 0.1518 | | 0.017 | 33.48 | 1155 | 0.3192 | 2.5197 | 0.1485 | | 0.0161 | 34.49 | 1190 | 0.3210 | 2.5103 | 0.1512 | | 0.0146 | 35.51 | 1225 | 0.3206 | 2.4992 | 0.1527 | | 0.0135 | 36.52 | 1260 | 0.3221 | 2.4620 | 0.1516 | | 0.0139 | 37.54 | 1295 | 0.3216 | 2.4769 | 0.1519 | | 0.0132 | 38.55 | 1330 | 0.3215 | 2.5613 | 0.1525 | | 0.0121 | 39.57 | 1365 | 0.3222 | 2.5648 | 0.1518 | | 0.0121 | 40.58 | 1400 | 0.3226 | 2.5601 | 0.1541 | | 0.0114 | 41.59 | 1435 | 0.3231 | 2.4888 | 0.1527 | | 0.0123 | 42.61 | 1470 | 0.3239 | 2.5037 | 0.1537 | | 0.0101 | 43.62 | 1505 | 0.3241 | 2.5378 | 0.1526 | | 0.0109 | 44.64 | 1540 | 0.3245 | 2.5312 | 0.1534 | | 0.0109 | 
45.65 | 1575 | 0.3245 | 2.5692 | 0.1529 | | 0.0098 | 46.67 | 1610 | 0.3243 | 2.5583 | 0.1536 | | 0.0109 | 47.68 | 1645 | 0.3248 | 2.5498 | 0.1536 | | 0.0101 | 48.7 | 1680 | 0.3244 | 2.5274 | 0.1539 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
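The card reports captioning metrics but no inference snippet. A hedged sketch follows, assuming the checkpoint works with the standard image-to-text pipeline (GIT models normally do); the image path is illustrative.

```python
# Hedged sketch: caption an image with the fine-tuned GIT checkpoint.
from transformers import pipeline

captioner = pipeline("image-to-text", model="mohsinshah/git-base-coco-dummy-temp100")
print(captioner("example.jpg")[0]["generated_text"])  # path or URL to an image
```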
gonchisi/roberta-base-bne-finetuned-new_or_used_title
gonchisi
2023-09-16T19:06:44Z
100
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:BSC-LT/roberta-base-bne", "base_model:finetune:BSC-LT/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-11T15:11:56Z
--- license: apache-2.0 base_model: BSC-TeMU/roberta-base-bne tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-new_or_used_title results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-new_or_used_title This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3983 - Accuracy: 0.8535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3803 | 1.0 | 5000 | 0.3551 | 0.8470 | | 0.2407 | 2.0 | 10000 | 0.3983 | 0.8535 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
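A hedged usage sketch (not part of the card): the checkpoint should work with the text-classification pipeline, but the card does not document the label names or the exact input format, so the example title below is only illustrative.

```python
# Assumed usage sketch; label names come from the checkpoint's config.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gonchisi/roberta-base-bne-finetuned-new_or_used_title",
)
print(clf("Iphone 12 64gb usado en excelente estado"))  # illustrative listing title
```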
fdgdfgdfrrr/fghgfhgfhfghfg
fdgdfgdfrrr
2023-09-16T19:04:05Z
0
0
null
[ "license:bsd-3-clause-clear", "region:us" ]
null
2023-09-16T19:04:05Z
--- license: bsd-3-clause-clear ---
badokorach/distilbert-base-cased-160923
badokorach
2023-09-16T18:56:55Z
3
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:badokorach/distilbert-base-cased-160923", "base_model:finetune:badokorach/distilbert-base-cased-160923", "endpoints_compatible", "region:us" ]
question-answering
2023-09-16T17:40:17Z
--- base_model: badokorach/distilbert-base-cased-160923 tags: - generated_from_keras_callback model-index: - name: badokorach/distilbert-base-cased-160923 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # badokorach/distilbert-base-cased-160923 This model is a fine-tuned version of [badokorach/distilbert-base-cased-160923](https://huggingface.co/badokorach/distilbert-base-cased-160923) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0084 - Validation Loss: 0.0 - Epoch: 28 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 4, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0655 | 0.0 | 0 | | 0.1225 | 0.0 | 1 | | 0.0648 | 0.0 | 2 | | 0.0527 | 0.0 | 3 | | 0.0653 | 0.0 | 4 | | 0.0456 | 0.0 | 5 | | 0.0482 | 0.0 | 6 | | 0.0318 | 0.0 | 7 | | 0.0183 | 0.0 | 8 | | 0.0237 | 0.0 | 9 | | 0.0146 | 0.0 | 10 | | 0.0231 | 0.0 | 11 | | 0.0115 | 0.0 | 12 | | 0.0161 | 0.0 | 13 | | 0.0129 | 0.0 | 14 | | 0.0162 | 0.0 | 15 | | 0.0118 | 0.0 | 16 | | 0.0109 | 0.0 | 17 | | 0.0097 | 0.0 | 18 | | 0.0070 | 0.0 | 19 | | 0.0073 | 0.0 | 20 | | 0.0099 | 0.0 | 21 | | 0.0067 | 0.0 | 22 | | 0.0037 | 0.0 | 23 | | 0.0041 | 0.0 | 24 | | 0.0082 | 0.0 | 25 | | 0.0037 | 0.0 | 26 | | 0.0054 | 0.0 | 27 | | 0.0084 | 0.0 | 28 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.13.3
MohannadTak/ppo-Huggy
MohannadTak
2023-09-16T18:50:57Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-09-16T18:50:52Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: MohannadTak/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: MohannadTak/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
anyuanay/my_awesome_qa_model
anyuanay
2023-09-16T18:48:07Z
93
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-16T18:39:49Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.3372 | | 2.7382 | 2.0 | 500 | 1.6893 | | 2.7382 | 3.0 | 750 | 1.6146 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
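A minimal sketch of how the fine-tuned SQuAD model above can be queried with the `question-answering` pipeline; the context and question are arbitrary examples, not taken from the card.

```python
from transformers import pipeline

# Extractive QA: the model selects an answer span from the provided context.
qa = pipeline("question-answering", model="anyuanay/my_awesome_qa_model")

context = (
    "The Transformers library provides thousands of pretrained models for "
    "tasks such as question answering, summarization, and translation."
)
print(qa(question="What tasks does the library cover?", context=context))
```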
CyberHarem/meigo_arisa_toarumajutsunoindex
CyberHarem
2023-09-16T18:46:13Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/meigo_arisa_toarumajutsunoindex", "license:mit", "region:us" ]
text-to-image
2023-09-16T18:35:39Z
--- license: mit datasets: - CyberHarem/meigo_arisa_toarumajutsunoindex pipeline_tag: text-to-image tags: - art --- # Lora of meigo_arisa_toarumajutsunoindex This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 3740, you need to download `3740/meigo_arisa_toarumajutsunoindex.pt` as the embedding and `3740/meigo_arisa_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 3740**, with the score of 0.952. The trigger words are: 1. `meigo_arisa_toarumajutsunoindex` 2. `pink_hair, long_hair, pink_eyes, hat, smile, open_mouth` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who finds the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:---------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.873 | [Download](5100/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.921 | [Download](4760/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.882 | [Download](4420/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.869 | [Download](4080/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | **3740** | **0.952** | [**Download**](3740/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.857 | [Download](3400/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | 
![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.839 | [Download](3060/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.655 | [Download](2720/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.746 | [Download](2380/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.856 | [Download](2040/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.845 | [Download](1700/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.806 | [Download](1360/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.732 | [Download](1020/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | 
![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.540 | [Download](680/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.512 | [Download](340/meigo_arisa_toarumajutsunoindex.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
VictorGil75/workshops-setfit-model_V3
VictorGil75
2023-09-16T18:37:42Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-09-16T18:36:57Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # VictorGil75/workshops-setfit-model_V3 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("VictorGil75/workshops-setfit-model_V3") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
dviejo/llama2-pretrained-quantized-model
dviejo
2023-09-16T18:34:32Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-16T18:34:28Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
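The card above only records the bitsandbytes quantization settings used during training. Below is a hedged loading sketch: the base model is not named in the card, so Llama-2-7B is an assumption inferred from the repository name, and the quantization config simply mirrors the values listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumption: the adapter was trained on top of Llama-2-7B; replace with the real base model.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"
ADAPTER = "dviejo/llama2-pretrained-quantized-model"

# Mirror the 4-bit NF4 settings from the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)

inputs = tokenizer("The quantized adapter was trained to", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```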
anyuanay/my_awesome_wnut_model
anyuanay
2023-09-16T18:22:59Z
89
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "dataset:wnut_17", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-16T18:21:25Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: test args: wnut_17 metrics: - name: Precision type: precision value: 0.5826086956521739 - name: Recall type: recall value: 0.31047265987025024 - name: F1 type: f1 value: 0.4050785973397823 - name: Accuracy type: accuracy value: 0.941558719165491 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2744 - Precision: 0.5826 - Recall: 0.3105 - F1: 0.4051 - Accuracy: 0.9416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2785 | 0.5871 | 0.2530 | 0.3536 | 0.9387 | | No log | 2.0 | 426 | 0.2744 | 0.5826 | 0.3105 | 0.4051 | 0.9416 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
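A short inference sketch for the WNUT-17 token classifier above (not part of the original card); `aggregation_strategy="simple"` merges sub-word pieces into whole entity spans. The example sentence is arbitrary.

```python
from transformers import pipeline

# Named-entity recognition with grouped sub-word tokens.
ner = pipeline(
    "token-classification",
    model="anyuanay/my_awesome_wnut_model",
    aggregation_strategy="simple",
)

sentence = "The Golden State Warriors are an American basketball team based in San Francisco."
for entity in ner(sentence):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```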
kanishka/smolm-mlm-bpe-unmask-seed_111
kanishka
2023-09-16T18:09:09Z
110
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "dataset:AO-CHILDES", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-16T04:18:06Z
--- base_model: models/smolm-mlm/config.json tags: - generated_from_trainer datasets: - AO-CHILDES metrics: - accuracy widget: - text: Do you like your <mask> ? - text: Look here . What is that <mask> ? - text: Where is <mask> ? model-index: - name: smolm-mlm-bpe-unmask-seed_111 results: [] pipeline_tag: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-mlm-bpe-unmask-seed_111 This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on 5M words of American-English child-directed input. It achieves the following results on the evaluation set: - Loss: 2.6956 - Accuracy: 0.4492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 512 - seed: 111 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 24000 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.5129 | 1.0 | 11938 | 3.4627 | 0.3523 | | 3.319 | 2.0 | 23876 | 3.3322 | 0.3641 | | 3.1577 | 3.0 | 35814 | 3.1841 | 0.3810 | | 3.0357 | 4.0 | 47752 | 3.0588 | 0.3982 | | 2.9606 | 5.0 | 59690 | 2.9535 | 0.4109 | | 2.87 | 6.0 | 71628 | 2.8745 | 0.4221 | | 2.7817 | 7.0 | 83566 | 2.8351 | 0.4284 | | 2.7388 | 8.0 | 95504 | 2.7536 | 0.4417 | | 2.6618 | 9.0 | 107442 | 2.7308 | 0.4424 | | 2.6258 | 10.0 | 119380 | 2.6880 | 0.4522 | ### Framework versions - Transformers 4.32.1 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
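A fill-mask sketch mirroring one of the widget prompts above; the top-k size and printed scores are illustrative only.

```python
from transformers import pipeline

# Mask filling with the child-directed-speech MLM described above.
unmasker = pipeline("fill-mask", model="kanishka/smolm-mlm-bpe-unmask-seed_111")

for prediction in unmasker("Do you like your <mask> ?", top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```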
Shishir1807/M2_llama
Shishir1807
2023-09-16T18:08:22Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-16T18:06:54Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.29.2 pip install einops==0.6.1 pip install accelerate==0.19.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="Shishir1807/M2_llama", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Shishir1807/M2_llama", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "Shishir1807/M2_llama", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Shishir1807/M2_llama" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
kanishka/smolm-mlm-bpe-unmask-seed_666
kanishka
2023-09-16T18:06:26Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-16T14:26:46Z
--- base_model: models/smolm-mlm/config.json tags: - generated_from_trainer metrics: - accuracy model-index: - name: smolm-mlm-bpe-unmask-seed_666 results: [] widget: - text: Do you like your <mask> ? - text: Look here . What is that <mask> ? - text: Where is <mask> ? pipeline_tag: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## Authors Kanishka Misra and Najoung Kim # smolm-mlm-bpe-unmask-seed_666 This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6992 - Accuracy: 0.4500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 512 - seed: 666 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 24000 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.5563 | 1.0 | 11938 | 3.5327 | 0.3460 | | 3.3111 | 2.0 | 23876 | 3.3614 | 0.3626 | | 3.1505 | 3.0 | 35814 | 3.1647 | 0.3857 | | 3.0774 | 4.0 | 47752 | 3.0538 | 0.3970 | | 2.9331 | 5.0 | 59690 | 2.9540 | 0.4137 | | 2.8411 | 6.0 | 71628 | 2.8855 | 0.4240 | | 2.7907 | 7.0 | 83566 | 2.8185 | 0.4294 | | 2.7201 | 8.0 | 95504 | 2.7614 | 0.4389 | | 2.6691 | 9.0 | 107442 | 2.7250 | 0.4450 | | 2.6016 | 10.0 | 119380 | 2.7179 | 0.4467 | ### Framework versions - Transformers 4.32.1 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
rajashekarvt/finetuned-setfit-model
rajashekarvt
2023-09-16T17:59:58Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-09-16T17:42:02Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # rajashekarvt/finetuned-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("rajashekarvt/finetuned-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
VictorGil75/workshops-setfit-model_V2
VictorGil75
2023-09-16T17:44:01Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-09-16T17:43:07Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # VictorGil75/workshops-setfit-model_V2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("VictorGil75/workshops-setfit-model_V2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
andreipb/roberta-poetry-anger-crpo
andreipb
2023-09-16T17:32:00Z
99
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-13T14:04:48Z
--- license: mit language: - en pipeline_tag: fill-mask library_name: transformers widget: - text: "This morning, the CEO was <mask>." example_title: "Example 1" - text: "Yesterday, all the students were <mask> in the park." example_title: "Example 2" - text: "All the children seemed <mask>." example_title: "Example 3" - text: "I opened the door and found a <mask> behind it." example_title: "Example 4" - text: "We went to see the <mask> movie." example_title: "Example 5" --- # roberta-poetry-anger-crpo This model is based on the [RoBERTa base model](https://huggingface.co/roberta-base) (125 M parameters) fine-tuned for 20 epochs on a poetry dataset of 33 MB. This dataset was extracted from the [Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) using an automatic classifier for **anger**. The model replaces a masked word, indicated by the `<mask>` tag, with a word associated with **anger**, while preserving fluency. Caution: the emotion (here, **anger**) only biases the choice of words with respect to the base model, so do not expect to find only words strongly associated with this emotion. This model was trained by [Teo Ferrari](https://www.linkedin.com/in/teo-ferrari-0a4009176/) as part of his Bachelor thesis at [HEIG-VD](https://gaps.heig-vd.ch/public/diplome/rapports.php?id=6763), supervised by [Andrei Popescu-Belis](http://iict-space.heig-vd.ch/apu/). The model is described in "[GPoeT: a Language Model Trained for Rhyme Generation on Synthetic Data](https://aclanthology.org/2023.latechclfl-1.2/)" and is used in the [CR-PO](https://github.com/heig-iict-ida/crpo) system for [interactive poem generation](https://aclanthology.org/2022.lrec-1.377), along with several other models for specific topics or emotions.
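A hedged sketch (not part of the original card) that contrasts this emotion-biased model with plain `roberta-base` on the same prompt, illustrating the caveat above that the emotion only biases word choice rather than guaranteeing it.

```python
from transformers import pipeline

# Compare top fill-mask candidates from the anger-biased model and the base model.
poetry = pipeline("fill-mask", model="andreipb/roberta-poetry-anger-crpo")
base = pipeline("fill-mask", model="roberta-base")

prompt = "All the children seemed <mask>."
print([p["token_str"] for p in poetry(prompt, top_k=5)])
print([p["token_str"] for p in base(prompt, top_k=5)])
```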
andreipb/roberta-poetry-religion-crpo
andreipb
2023-09-16T17:31:51Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-13T15:18:08Z
--- license: mit language: - en pipeline_tag: fill-mask library_name: transformers widget: - text: "This morning, the CEO was <mask>." example_title: "Example 1" - text: "Yesterday, all the students were <mask> in the park." example_title: "Example 2" - text: "All the children seemed <mask>." example_title: "Example 3" - text: "I opened the door and found a <mask> behind it." example_title: "Example 4" - text: "We went to see the <mask> movie." example_title: "Example 5" --- # roberta-poetry-religion-crpo This model is based on the [RoBERTa base model](https://huggingface.co/roberta-base) (125M parameters) fine-tuned for 20 epochs on a poetry dataset of 38 MB. This dataset was extracted from the [Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) using an automatic classifier for poems in relation with the topic of **religion and spirituality**. The model replaces a masked word, indicated by the `<mask>` tag, with a word associated with **religion and spirituality**, while preserving fluency. Caution: the topic (here, **religion and spirituality**) only biases the choice of words with respect to the base model, but do not expect to find only words strongly associated to this topic. This model was trained by [Teo Ferrari](https://www.linkedin.com/in/teo-ferrari-0a4009176/) as part of his Bachelor thesis at [HEIG-VD](https://gaps.heig-vd.ch/public/diplome/rapports.php?id=6763), supervised by [Andrei Popescu-Belis](http://iict-space.heig-vd.ch/apu/). The model is described in "[GPoeT: a Language Model Trained for Rhyme Generation on Synthetic Data](https://aclanthology.org/2023.latechclfl-1.2/)" and is used in the [CR-PO](https://github.com/heig-iict-ida/crpo) system for [interactive poem generation](https://aclanthology.org/2022.lrec-1.377), along with several other models for specific topics or emotions.
abvijaykumar/distilgpt2-prefix-tuned-textgen
abvijaykumar
2023-09-16T16:57:00Z
3
0
peft
[ "peft", "region:us" ]
null
2023-09-16T16:46:25Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
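The card above only lists the PEFT version, so the sketch below fills in the rest under stated assumptions: `distilgpt2` as the base model is inferred from the repository name, and the generation settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the prefix-tuning adapter was trained on top of distilgpt2.
BASE_MODEL = "distilgpt2"
ADAPTER = "abvijaykumar/distilgpt2-prefix-tuned-textgen"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(BASE_MODEL), ADAPTER)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```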
117jahfar/my-cats-new
117jahfar
2023-09-16T16:42:49Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-16T16:29:48Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-cats-new Dreambooth model trained by 117jahfar following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: MESCOE223 Sample pictures of this concept: ![0](https://huggingface.co/117jahfar/my-cats-new/resolve/main/sample_images/_752942be-d83f-454f-aa02-b41b33587f33.jpeg) ![1](https://huggingface.co/117jahfar/my-cats-new/resolve/main/sample_images/_7c48c337-45e1-47fe-aed5-4ea50ca1b793.jpeg) ![2](https://huggingface.co/117jahfar/my-cats-new/resolve/main/sample_images/_45a551e4-4713-4cb1-bb1f-61afb6f70499.jpeg) ![3](https://huggingface.co/117jahfar/my-cats-new/resolve/main/sample_images/_5dbca548-e84a-42a1-88e3-02196dbd8d00.jpeg)
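A text-to-image sketch for the DreamBooth checkpoint above (not part of the original card); the instance prompt used during training is not stated, so the prompt below is only a guess based on the concept name, and fp16 + CUDA are assumed to be available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint; the prompt wording is a guess at the trained concept token.
pipe = StableDiffusionPipeline.from_pretrained(
    "117jahfar/my-cats-new", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of my-cats-new cat sitting on a windowsill").images[0]
image.save("cat.png")
```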
prabhatsingh/bert-finetuned-squad
prabhatsingh
2023-09-16T16:42:39Z
122
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-16T15:22:30Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
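Besides the high-level pipeline, the answer span can be decoded directly from the start/end logits; the sketch below (not part of the original card) shows that lower-level path for the checkpoint above, with an arbitrary example context.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "prabhatsingh/bert-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions, then decode the span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```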
Unstoppable0/modexla
Unstoppable0
2023-09-16T16:39:54Z
0
0
null
[ "arxiv:2211.06679", "region:us" ]
null
2023-09-16T16:08:11Z
# Stable Diffusion web UI A browser interface based on Gradio library for Stable Diffusion. ![](screenshot.png) ## Features [Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features): - Original txt2img and img2img modes - One click install and run script (but you still must install python and git) - Outpainting - Inpainting - Color Sketch - Prompt Matrix - Stable Diffusion Upscale - Attention, specify parts of text that the model should pay more attention to - a man in a `((tuxedo))` - will pay more attention to tuxedo - a man in a `(tuxedo:1.21)` - alternative syntax - select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user) - Loopback, run img2img processing multiple times - X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters - Textual Inversion - have as many embeddings as you want and use any names you like for them - use multiple embeddings with different numbers of vectors per token - works with half precision floating point numbers - train embeddings on 8GB (also reports of 6GB working) - Extras tab with: - GFPGAN, neural network that fixes faces - CodeFormer, face restoration tool as an alternative to GFPGAN - RealESRGAN, neural network upscaler - ESRGAN, neural network upscaler with a lot of third party models - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers - LDSR, Latent diffusion super resolution upscaling - Resizing aspect ratio options - Sampling method selection - Adjust sampler eta values (noise multiplier) - More advanced noise setting options - Interrupt processing at any time - 4GB video card support (also reports of 2GB working) - Correct seeds for batches - Live prompt token length validation - Generation parameters - parameters you used to generate images are saved with that image - in PNG chunks for PNG, in EXIF for JPEG - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI - can be disabled in settings - drag and drop an image/text-parameters to promptbox - Read Generation Parameters Button, loads parameters in promptbox to UI - Settings page - Running arbitrary python code from UI (must run with `--allow-code` to enable) - Mouseover hints for most UI elements - Possible to change defaults/mix/max/step values for UI elements via text config - Tiling support, a checkbox to create images that can be tiled like textures - Progress bar and live image generation preview - Can use a separate neural network to produce previews with almost none VRAM or compute requirement - Negative prompt, an extra text field that allows you to list what you don't want to see in generated image - Styles, a way to save part of prompt and easily apply them via dropdown later - Variations, a way to generate same image but with tiny differences - Seed resizing, a way to generate same image but at slightly different resolution - CLIP interrogator, a button that tries to guess prompt from an image - Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway - Batch Processing, process a group of files using img2img - Img2img Alternative, reverse Euler method of cross attention control - Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions 
- Reloading checkpoints on the fly - Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one - [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community - [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once - separate prompts using uppercase `AND` - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2` - No token limit for prompts (original stable diffusion lets you use up to 75 tokens) - DeepDanbooru integration, creates danbooru style tags for anime prompts - [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args) - via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI - Generate forever option - Training tab - hypernetworks and embeddings options - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime) - Clip skip - Hypernetworks - Loras (same as Hypernetworks but more pretty) - A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt - Can select to load a different VAE from settings screen - Estimated completion time in progress bar - API - Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML - via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)) - [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions - [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions - Now without any bad letters! - Load checkpoints in safetensors format - Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64 - Now with a license! - Reorder elements in the UI from settings screen ## Installation and Running Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for: - [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) - [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs. - [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page) Alternatively, use online services (like Google Colab): - [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services) ### Installation on Windows 10/11 with NVidia-GPUs using release package 1. 
Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract it's contents. 2. Run `update.bat`. 3. Run `run.bat`. > For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) ### Automatic Installation on Windows 1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (Newer version of Python does not support torch), checking "Add Python to PATH". 2. Install [git](https://git-scm.com/download/win). 3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`. 4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user. ### Automatic Installation on Linux 1. Install the dependencies: ```bash # Debian-based: sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0 # Red Hat-based: sudo dnf install wget git python3 # Arch-based: sudo pacman -S wget git python3 ``` 2. Navigate to the directory you would like the webui to be installed and execute the following command: ```bash wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh ``` 3. Run `webui.sh`. 4. Check `webui-user.sh` for options. ### Installation on Apple Silicon Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon). ## Contributing Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) ## Documentation The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki). For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki). ## Credits Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file. - Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers - k-diffusion - https://github.com/crowsonkb/k-diffusion.git - GFPGAN - https://github.com/TencentARC/GFPGAN.git - CodeFormer - https://github.com/sczhou/CodeFormer - ESRGAN - https://github.com/xinntao/ESRGAN - SwinIR - https://github.com/JingyunLiang/SwinIR - Swin2SR - https://github.com/mv-lab/swin2sr - LDSR - https://github.com/Hafiidz/latent-diffusion - MiDaS - https://github.com/isl-org/MiDaS - Ideas for optimizations - https://github.com/basujindal/stable-diffusion - Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing. - Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion) - Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention) - Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas). 
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd - Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot - CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator - Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - xformers - https://github.com/facebookresearch/xformers - DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru - Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6) - Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix - Security advice - RyotaK - UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC - TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd - LyCORIS - KohakuBlueleaf - Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling - Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user. - (You)
cgijoe/ppo-LunarLander-v2
cgijoe
2023-09-16T16:34:23Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-05T20:39:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -658.36 +/- 50.44 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
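The usage section above is left as a TODO; the sketch below is one plausible completion, assuming the checkpoint follows the usual `huggingface_sb3` naming convention (the actual filename in the repo may differ) and that `gymnasium[box2d]` is installed.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the zip is stored under the default "<env-id>.zip"-style name.
checkpoint = load_from_hub(
    repo_id="cgijoe/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy for a few episodes (requires gymnasium[box2d]).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```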
CyberHarem/vento_of_the_front_toarumajutsunoindex
CyberHarem
2023-09-16T16:33:53Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/vento_of_the_front_toarumajutsunoindex", "license:mit", "region:us" ]
text-to-image
2023-09-16T16:22:55Z
--- license: mit datasets: - CyberHarem/vento_of_the_front_toarumajutsunoindex pipeline_tag: text-to-image tags: - art --- # Lora of vento_of_the_front_toarumajutsunoindex This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 4420, you need to download `4420/vento_of_the_front_toarumajutsunoindex.pt` as the embedding and `4420/vento_of_the_front_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 4420**, with the score of 0.967. The trigger words are: 1. `vento_of_the_front_toarumajutsunoindex` 2. `blonde_hair, piercing, blue_eyes` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who finds the generated image content offensive to their values. 
These are available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:------|:------|:---------|:----------|:----------|:----------|:----------|:----------|:----------|:-------|:--------|:-----|:-----|:-----|:-----|:------|:-----|:-------|
| 5100 | 0.844 | [Download](5100/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | ![pattern_6-5100](5100/previews/pattern_6.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) |
| 4760 | 0.960 | [Download](4760/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | ![pattern_6-4760](4760/previews/pattern_6.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) |
| **4420** | **0.967** | [**Download**](4420/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | ![pattern_6-4420](4420/previews/pattern_6.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) |
| 4080 | 0.916 | [Download](4080/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | ![pattern_6-4080](4080/previews/pattern_6.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) |
| 3740 | 0.902 | [Download](3740/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | ![pattern_6-3740](3740/previews/pattern_6.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) |
| 3400 | 0.889 | [Download](3400/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | ![pattern_6-3400](3400/previews/pattern_6.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) |
| 3060 | 0.767 | [Download](3060/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | ![pattern_6-3060](3060/previews/pattern_6.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) |
| 2720 | 0.830 | [Download](2720/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | ![pattern_6-2720](2720/previews/pattern_6.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) |
| 2380 | 0.846 | [Download](2380/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | ![pattern_6-2380](2380/previews/pattern_6.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) |
| 2040 | 0.760 | [Download](2040/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | ![pattern_6-2040](2040/previews/pattern_6.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) |
| 1700 | 0.782 | [Download](1700/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | ![pattern_6-1700](1700/previews/pattern_6.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) |
| 1360 | 0.684 | [Download](1360/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | ![pattern_6-1360](1360/previews/pattern_6.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) |
| 1020 | 0.610 | [Download](1020/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | ![pattern_6-1020](1020/previews/pattern_6.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) |
| 680 | 0.469 | [Download](680/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | ![pattern_6-680](680/previews/pattern_6.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) |
| 340 | 0.441 | [Download](340/vento_of_the_front_toarumajutsunoindex.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | ![pattern_6-340](340/previews/pattern_6.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
winglian/omega-3b
winglian
2023-09-16T16:32:13Z
31
3
transformers
[ "transformers", "pytorch", "mixformer-sequential", "text-generation", "custom_code", "en", "dataset:nampdn-ai/tiny-textbooks", "dataset:nampdn-ai/tiny-lessons", "autotrain_compatible", "region:us" ]
text-generation
2023-09-15T07:40:11Z
--- datasets: - nampdn-ai/tiny-textbooks - nampdn-ai/tiny-lessons language: - en --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # Omega 2.6B This model is derived from phi 1.3B using layer stacking techniques to double the number of hidden layers in the model. The model was then trained for 1 epoch on data from tiny-textbooks and tiny-lessons. # Training https://wandb.ai/wing-lian/phi-2x-pt-tiny
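Since the card only links the W&B run, here is a rough sketch of the layer-stacking idea described above: duplicating each transformer block so the model ends up with twice as many hidden layers. GPT-2 is used as a stand-in because its block list (`model.transformer.h`) is easy to access; phi-1.5 uses custom `mixformer-sequential` code, so the actual derivation of this model may have been done differently.

```python
# Hedged sketch of layer stacking: duplicate each transformer block so the
# model has twice as many hidden layers. GPT-2 is a stand-in base model here.
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

stacked = nn.ModuleList()
for block in model.transformer.h:
    stacked.append(block)                  # keep the original layer
    stacked.append(copy.deepcopy(block))   # stack a duplicated copy on top

model.transformer.h = stacked
model.config.n_layer = len(stacked)        # keep the config in sync

print(f"Model now has {model.config.n_layer} hidden layers")
```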
markredito/FilmGrain-LoRA-stablediffusion
markredito
2023-09-16T16:23:34Z
10
1
diffusers
[ "diffusers", "art", "text-to-image", "en", "license:artistic-2.0", "region:us" ]
text-to-image
2023-09-09T17:22:12Z
---
library_name: diffusers
license: artistic-2.0
pipeline_tag: text-to-image
language:
- en
tags:
- art
---

### Introducing Filmgrain LoRA for Stable Diffusion 1.5

<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5ef5ed30-feca-46e4-b6b4-6b41ef2ed9f7/width=1024/00004-1404433137.jpeg" width="512" height="512">
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bc14aaa7-53c1-45ea-abbb-06ecdd011411/width=1024/00005-1404433138.jpeg" width="512" height="512">
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1d881292-c7f5-4155-bff3-fd951a9bf04d/width=1024/00008-1404433140.jpeg" width="512" height="512">

**Overview**

Meet our Low Rank Adapter (LoRA) for Stable Diffusion 1.5: your go-to for adding that nostalgic, film-like touch to digital images.

**Who's It For?**

If you love the "analog style" in photographs—grains, colors, and all—this one's for you.

**Features**

- Film Grain: Get that classic grainy texture.
- Slight Discoloration: Add subtle, film-like color shifts to your images.

**Recommended Settings:**

Note: This LoRA works well for portrait photography.

- Model/checkpoint to use this LoRA with: dreamshaper
- Steps: 25
- CFG scale: 7
- Sampler: Tested with Euler a and DPM++ 2M Karras

Experiment with different settings; you might get better results! A diffusers-based loading sketch is shown below.

**Local Installation**

- Download and save the tensor file to the models\lora folder of your Stable Diffusion installation. If using Automatic1111, it would be here: \\a1111\\stable-diffusion-webui\\models\\Lora
- When using the Automatic1111 UI, click the “show/hide networks” button.
- Choose the LoRA, and it automatically adds the activation tag to your prompt.

**Alternate Download Link**

https://civitai.com/models/142795/filmgrain
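For users who prefer the diffusers library to the Automatic1111 UI, a minimal sketch follows. The `weight_name`, the `Lykon/DreamShaper` checkpoint id, and the prompt are assumptions used for illustration; check this repository for the actual LoRA filename and adjust as needed.

```python
# Hedged sketch: applying the film-grain LoRA with diffusers, using the
# recommended settings from this card (25 steps, CFG 7, DPM++ 2M Karras).
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", torch_dtype=torch.float16  # a Dreamshaper checkpoint, as recommended above
).to("cuda")

# DPM++ 2M Karras, one of the samplers the card reports testing with.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

pipe.load_lora_weights(
    "markredito/FilmGrain-LoRA-stablediffusion",
    weight_name="FilmGrain.safetensors",  # assumption: check the repo for the real filename
)

image = pipe(
    "analog style portrait photo of a woman, film grain",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("filmgrain_portrait.png")
```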
clp/opt-6.7b-lora
clp
2023-09-16T16:11:39Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-16T16:11:36Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
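The quantization settings above can be mirrored when loading the adapter. The sketch below assumes the adapter sits on top of `facebook/opt-6.7b` (suggested by the repo name, not stated in the card) and that the listed 8-bit config is also applied at inference time.

```python
# Hedged sketch: loading the LoRA adapter on top of an 8-bit base model,
# mirroring the bitsandbytes settings listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,       # matches the training-time quantization config
    llm_int8_threshold=6.0,
)

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",     # assumption: base model inferred from the repo name
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "clp/opt-6.7b-lora")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```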
elenaThevalley/mobilenet_v2_1.0_224-finetuned-32bs-0.01lr
elenaThevalley
2023-09-16T16:05:57Z
194
0
transformers
[ "transformers", "pytorch", "mobilenet_v2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/mobilenet_v2_1.0_224", "base_model:finetune:google/mobilenet_v2_1.0_224", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-15T15:48:51Z
--- license: other base_model: google/mobilenet_v2_1.0_224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: mobilenet_v2_1.0_224-finetuned-32bs-0.01lr results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9206560874783305 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilenet_v2_1.0_224-finetuned-32bs-0.01lr This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2230 - Accuracy: 0.9207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.99 | 53 | 0.3603 | 0.8745 | | No log | 1.99 | 107 | 0.2323 | 0.9183 | | No log | 2.96 | 159 | 0.2230 | 0.9207 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
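For readers who want to reproduce this setup, the hyperparameters listed above map onto `TrainingArguments` roughly as sketched below; the output directory and any options not listed in the card are placeholders.

```python
# Hedged sketch: the reported hyperparameters expressed as TrainingArguments.
# Only the values listed above are meaningful; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mobilenet_v2_1.0_224-finetuned-32bs-0.01lr",  # placeholder name
    learning_rate=0.01,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # 32 x 16 = 512 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```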
hardikpanchariya/photo-sks-hp
hardikpanchariya
2023-09-16T16:04:02Z
0
0
null
[ "text-to-image", "doi:10.57967/hf/1119", "license:openrail", "region:us" ]
text-to-image
2023-09-16T15:53:24Z
--- license: openrail pipeline_tag: text-to-image ---
SuperSecureHuman/t5_base_trails
SuperSecureHuman
2023-09-16T15:52:29Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:opus_books", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-16T14:54:35Z
--- license: apache-2.0 base_model: t5-base tags: - generated_from_trainer datasets: - opus_books model-index: - name: t5_base_trails results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_base_trails This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the opus_books dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
InexperiencedMe/a2c-PandaReachDense-v3
InexperiencedMe
2023-09-16T15:48:07Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T15:42:41Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.21 +/- 0.09 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
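A plausible completion of the TODO stub above is sketched here. The checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention, so adjust it to whatever file actually exists in this repository.

```python
# Hedged sketch filling in the usage TODO: load the A2C checkpoint from the Hub
# and run it in the PandaReachDense-v3 environment.
import gymnasium as gym
import panda_gym  # registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="InexperiencedMe/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumption: typical huggingface_sb3 naming
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(100):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```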
ctrltokyo/phi-1_5-finetuned-gsm8k
ctrltokyo
2023-09-16T15:46:17Z
0
0
null
[ "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us" ]
null
2023-09-16T15:42:32Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1_5-finetuned-gsm8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-gsm8k This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
huygdng/wav2vec2-large-xls-r-300m-tw-trans-colab-2
huygdng
2023-09-16T15:41:58Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-16T14:28:08Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-tw-trans-colab-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-tw-trans-colab-2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 100 ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
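A minimal inference sketch with the transformers `pipeline` API is given below; whether the fine-tuned checkpoint ships a usable processor/tokenizer is an assumption, and the audio path is a placeholder.

```python
# Hedged sketch: transcribing a local audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="huygdng/wav2vec2-large-xls-r-300m-tw-trans-colab-2",
)
print(asr("sample.wav"))  # placeholder path to a local 16 kHz audio file
```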
Anastasiaps/Reinforce-Cart-Pole
Anastasiaps
2023-09-16T15:34:47Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T15:34:38Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cart-Pole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 452.30 +/- 97.19 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
salim4n/dqn-SpaceInvadersNoFrameskip-v4
salim4n
2023-09-16T15:29:24Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T15:28:53Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 385.00 +/- 136.99 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga salim4n -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga salim4n -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga salim4n ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
DazMashaly/VIT_large_ieee
DazMashaly
2023-09-16T15:19:02Z
8
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-large-patch16-224-in21k", "base_model:finetune:google/vit-large-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-14T23:19:34Z
--- license: apache-2.0 base_model: google/vit-large-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: VIT_large_ieee results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # VIT_large_ieee This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0230 - Accuracy: 0.9941 ## Model description This model was used for IEEE ManSB VICTORIS 2.0 Final Competition ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0047 | 0.67 | 100 | 0.0283 | 0.9929 | | 0.0165 | 1.34 | 200 | 0.0230 | 0.9941 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
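As with most ViT fine-tunes, inference can be done with the transformers `pipeline` API; a short sketch follows, with the image path as a placeholder.

```python
# Hedged sketch: classifying a local image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="DazMashaly/VIT_large_ieee")
print(classifier("example.jpg"))  # placeholder path to a local image
```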
anyuanay/my_awesome_model
anyuanay
2023-09-16T15:14:44Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-16T14:57:41Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: my_awesome_model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.0002 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 157 | 0.0004 | 1.0 | | No log | 2.0 | 314 | 0.0002 | 1.0 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
liadraz/rl_course_vizdoom_health_gathering_supreme
liadraz
2023-09-16T15:11:55Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-16T15:11:45Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 4.80 +/- 1.15 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r liadraz/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
DaunIDebil/Zelensky-RVC2.0
DaunIDebil
2023-09-16T15:11:22Z
0
0
null
[ "region:us" ]
null
2023-09-16T15:07:23Z
This model is not mine. I found it somewhere on the internet in May.