Dataset columns (type and observed range):

- modelId: string, length 5 to 139
- author: string, length 2 to 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 555 distinct values
- tags: list, length 1 to 4.05k
- pipeline_tag: string, 55 distinct values
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53
- card: string, length 11 to 1.01M
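The block above is the column schema for the records that follow. As a rough illustration of how such a dump is typically consumed, here is a minimal sketch using the `datasets` library; the repository id is a placeholder, since the source dataset is not named in this extract.

```python
from datasets import load_dataset

# Placeholder repo id: the source dataset is not identified in this dump
ds = load_dataset("your-username/model-cards-dump", split="train")

# Each record carries the columns listed above: modelId, author, downloads, likes, tags, card, ...
row = ds[0]
print(row["modelId"], row["downloads"], row["pipeline_tag"])
```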
nikinetrahutama/afx-ai-llama-chat-model-sqlprompt-11
nikinetrahutama
2023-08-22T13:55:24Z
2
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-08-22T12:15:21Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
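The card above records only the bitsandbytes settings used during training. As a hedged sketch of how the adapter might be reloaded with the same nf4 configuration: the base checkpoint is not named in the card, so `meta-llama/Llama-2-7b-hf` below is purely an assumption based on the `llama` tag.

```python
# Hypothetical usage sketch: the base checkpoint is NOT stated in the card;
# "meta-llama/Llama-2-7b-hf" is only an assumption based on the "llama" tag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumption, not confirmed by the card
adapter_id = "nikinetrahutama/afx-ai-llama-chat-model-sqlprompt-11"

# Mirror the 4-bit config reported in the card (nf4, double quant, bfloat16 compute)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter

prompt = "Write an SQL query that counts the rows in a table named orders."
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```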
gollumeo/Gollumecoder-1b
gollumeo
2023-08-22T13:54:20Z
6
0
peft
[ "peft", "region:us" ]
null
2023-08-21T08:26:03Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
ColdChair/hw_1
ColdChair
2023-08-22T13:53:18Z
0
0
null
[ "dataset:roneneldan/TinyStories", "license:openrail", "region:us" ]
null
2023-08-22T13:51:46Z
--- license: openrail datasets: - roneneldan/TinyStories ---
yodi/karina
yodi
2023-08-22T13:42:51Z
17
0
transformers
[ "transformers", "pytorch", "safetensors", "bloom", "text-generation", "id", "dataset:Local", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-22T05:35:38Z
--- datasets: - Local license: bigscience-bloom-rail-1.0 language: - id pipeline_tag: text-generation --- # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 4. [Training](#training) # Model Summary > We present KARINA, finetuned from BLOOMZ bigscience/bloomz-3b, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOMZ pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*prompt = f"Given the question:\n{{ siapa kamu? }}\n---\nAnswer:\n"*", the model will most likely answer "*Saya Karina. Ada yang bisa saya bantu?*". ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_NAME = "yodi/karina" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) model = AutoModelForCausalLM.from_pretrained(MODEL_NAME) inputs = tokenizer.encode("Given the question:\n{{ siapa kamu? }}\n---\nAnswer:\n", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 4 bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import pipeline MODEL_NAME = "yodi/karina" model_4bit = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cuda:1", load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) prompt = f"Given the question:\n{{ siapa kamu? }}\n---\nAnswer:\n" generator = pipeline('text-generation', model=model_4bit, tokenizer=tokenizer, do_sample=False) result = generator(prompt, max_length=256) print(result) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import pipeline MODEL_NAME = "yodi/karina" model_4bit = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cuda:1", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) prompt = f"Given the question:\n{{ siapa kamu? }}\n---\nAnswer:\n" generator = pipeline('text-generation', model=model_4bit, tokenizer=tokenizer, do_sample=False) result = generator(prompt, max_length=256) print(result) ``` </details> ``` [{'generated_text': 'Given the question:\n{ siapa kamu? 
}\n---\nAnswer:\nSaya Karina, asisten virtual siap membantu seputar estimasi harga atau pertanyaan lain'}] ``` ### Infer in Local with Gradio ```python from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import pipeline import re import gradio as gr MODEL_NAME = "yodi/karina" model_4bit = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cuda:1", load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) generator = pipeline('text-generation', model=model_4bit, tokenizer=tokenizer, do_sample=False) def preprocess(text): return f"Given the question:\n{{ {text} }}\n---\nAnswer:\n" def generate(text): preprocess_result = preprocess(text) result = generator(preprocess_result, max_length=256) output = re.split(r'\n---\nAnswer:\n',result[0]['generated_text'])[1] return output with gr.Blocks() as demo: input_text = gr.Textbox(label="Input", lines=1) button = gr.Button("Submit") output_text = gr.Textbox(lines=6, label="Output") button.click(generate, inputs=[input_text], outputs=output_text) demo.launch(enable_queue=True, debug=True) ``` And open the gradio url from browser. ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0 <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt and its following BLOOMZ models. # Training ## Model - **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file
thinkermode/kamalhassan-sdxl-db
thinkermode
2023-08-22T13:41:22Z
2
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-08-22T13:41:20Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: kamalhassan tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
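The AutoTrain card above gives only the base model and the instance prompt. A possible inference sketch follows; it assumes AutoTrain exported DreamBooth LoRA weights to this repo (its usual behaviour, but not confirmed by the card), and the prompt wording is illustrative.

```python
# Hypothetical sketch: assumes the repo holds DreamBooth LoRA weights for SDXL,
# which is AutoTrain's usual output but is not stated in the card itself.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("thinkermode/kamalhassan-sdxl-db")

# "kamalhassan" is the instance prompt declared in the card metadata
image = pipe("a studio portrait photo of kamalhassan", num_inference_steps=30).images[0]
image.save("kamalhassan.png")
```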
isaranga/distilbert-base-uncased-finetuned-emotion
isaranga
2023-08-22T13:38:37Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T09:12:05Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9219811284875927 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2160 - Accuracy: 0.922 - F1: 0.9220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8295 | 1.0 | 250 | 0.3180 | 0.906 | 0.9049 | | 0.2523 | 2.0 | 500 | 0.2160 | 0.922 | 0.9220 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
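The auto-generated card above stops at training details. A short inference sketch with the transformers text-classification pipeline (the example sentence is illustrative; the emotion label names come from the `emotion` dataset and are not listed in the card):

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint described above
classifier = pipeline("text-classification", model="isaranga/distilbert-base-uncased-finetuned-emotion")

# Returns the most likely emotion label with its score
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```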
Asad182/whisper-small-ur
Asad182
2023-08-22T13:27:04Z
77
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ur", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-21T15:48:01Z
--- language: - ur license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Urdu - Asad Rizvi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Urdu - Asad Rizvi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 ### Training results ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
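No usage snippet is included in the card above; a minimal automatic-speech-recognition sketch follows (the audio path and chunk length are illustrative choices, not values from the card):

```python
from transformers import pipeline

# Automatic speech recognition with the fine-tuned Urdu Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="Asad182/whisper-small-ur")

# "sample_ur.wav" is a placeholder path; chunk_length_s=30 is an illustrative choice
result = asr("sample_ur.wav", chunk_length_s=30)
print(result["text"])
```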
dkqjrm/20230822202040
dkqjrm
2023-08-22T13:20:05Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T11:20:58Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230822202040' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230822202040 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5208 - Accuracy: 0.7365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.7722 | 0.5271 | | 0.7133 | 2.0 | 624 | 0.5588 | 0.4982 | | 0.7133 | 3.0 | 936 | 0.6273 | 0.4729 | | 0.6364 | 4.0 | 1248 | 0.5976 | 0.4946 | | 0.6219 | 5.0 | 1560 | 0.7382 | 0.5415 | | 0.6219 | 6.0 | 1872 | 0.5328 | 0.6282 | | 0.5974 | 7.0 | 2184 | 0.5253 | 0.6282 | | 0.5974 | 8.0 | 2496 | 0.8677 | 0.5668 | | 0.5614 | 9.0 | 2808 | 0.5249 | 0.5884 | | 0.5732 | 10.0 | 3120 | 0.5113 | 0.6895 | | 0.5732 | 11.0 | 3432 | 0.5092 | 0.6931 | | 0.5559 | 12.0 | 3744 | 0.4693 | 0.7148 | | 0.5301 | 13.0 | 4056 | 0.4781 | 0.7256 | | 0.5301 | 14.0 | 4368 | 0.5693 | 0.6823 | | 0.4999 | 15.0 | 4680 | 0.4649 | 0.7256 | | 0.4999 | 16.0 | 4992 | 0.5702 | 0.6859 | | 0.4712 | 17.0 | 5304 | 0.4598 | 0.7401 | | 0.4431 | 18.0 | 5616 | 0.4750 | 0.7076 | | 0.4431 | 19.0 | 5928 | 0.4782 | 0.7184 | | 0.4348 | 20.0 | 6240 | 0.6236 | 0.6570 | | 0.4113 | 21.0 | 6552 | 0.5125 | 0.7473 | | 0.4113 | 22.0 | 6864 | 0.5703 | 0.6787 | | 0.4035 | 23.0 | 7176 | 0.5080 | 0.7112 | | 0.4035 | 24.0 | 7488 | 0.4619 | 0.7365 | | 0.3898 | 25.0 | 7800 | 0.5639 | 0.7076 | | 0.3736 | 26.0 | 8112 | 0.4968 | 0.7292 | | 0.3736 | 27.0 | 8424 | 0.4483 | 0.7509 | | 0.3708 | 28.0 | 8736 | 0.4929 | 0.7220 | | 0.3656 | 29.0 | 9048 | 0.5168 | 0.7401 | | 0.3656 | 30.0 | 9360 | 0.5618 | 0.7256 | | 0.3545 | 31.0 | 9672 | 0.4900 | 0.7365 | | 0.3545 | 32.0 | 9984 | 0.4676 | 0.7256 | | 0.3474 | 33.0 | 10296 | 0.5222 | 0.7220 | | 0.3326 | 34.0 | 10608 | 0.4861 | 0.7437 | | 0.3326 | 35.0 | 10920 | 0.4560 | 0.7401 | | 0.3313 | 36.0 | 11232 | 0.5375 | 0.7256 | | 0.3209 | 37.0 | 11544 | 0.5606 | 0.7329 | | 0.3209 | 38.0 | 11856 | 0.5173 | 0.7401 | | 0.3169 | 39.0 | 12168 | 0.5060 | 0.7329 | | 0.3169 | 40.0 | 12480 | 0.5250 | 0.7365 | | 0.3096 | 41.0 | 12792 | 0.5133 | 0.7256 | | 0.3097 | 42.0 | 13104 | 0.5012 | 0.7437 | | 0.3097 | 43.0 | 13416 | 0.5274 | 0.7401 | | 0.3049 | 44.0 | 13728 | 0.5086 | 0.7329 | | 0.2929 | 45.0 | 14040 | 0.4934 | 0.7329 | | 0.2929 | 46.0 | 14352 | 0.5667 | 0.7401 | | 0.293 | 47.0 | 14664 | 0.5047 | 0.7437 | | 0.293 | 48.0 | 14976 | 0.5353 | 0.7292 | | 0.291 | 49.0 | 15288 | 0.5280 | 0.7401 | | 0.2817 | 50.0 | 15600 | 0.5142 | 0.7365 | | 0.2817 | 51.0 | 15912 | 0.5141 | 0.7329 | | 0.2822 | 52.0 | 16224 | 0.4990 | 0.7329 | | 0.2758 | 53.0 | 16536 | 0.5074 | 0.7292 | | 0.2758 | 54.0 | 16848 | 0.5147 | 0.7329 | | 0.2763 | 55.0 | 17160 | 0.5138 | 0.7365 | | 0.2763 | 56.0 
| 17472 | 0.5291 | 0.7365 | | 0.2782 | 57.0 | 17784 | 0.5204 | 0.7329 | | 0.272 | 58.0 | 18096 | 0.5093 | 0.7365 | | 0.272 | 59.0 | 18408 | 0.5217 | 0.7365 | | 0.2758 | 60.0 | 18720 | 0.5208 | 0.7365 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Polo123/llama2-qlora-finetunined-task
Polo123
2023-08-22T13:19:45Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-22T13:19:12Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
Saurabh16100/distilroberta-base-finetuned-wikitext2
Saurabh16100
2023-08-22T13:02:56Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-22T12:21:54Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0852 | 1.0 | 2406 | 1.9234 | | 1.992 | 2.0 | 4812 | 1.8828 | | 1.9603 | 3.0 | 7218 | 1.8223 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
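As with the other auto-generated cards, no inference example is given. One possible fill-mask sketch (the example sentence is illustrative; RoBERTa-style models use the `<mask>` token):

```python
from transformers import pipeline

# Fill-mask inference with the fine-tuned DistilRoBERTa checkpoint
unmasker = pipeline("fill-mask", model="Saurabh16100/distilroberta-base-finetuned-wikitext2")

# Print each candidate completion with its probability
for candidate in unmasker("The capital of France is <mask>."):
    print(candidate["token_str"], round(candidate["score"], 4))
```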
H-amza/q-taxi-v1
H-amza
2023-08-22T13:02:30Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T13:02:27Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="H-amza/q-taxi-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
akar49/detr-crack-II
akar49
2023-08-22T12:54:57Z
34
0
transformers
[ "transformers", "pytorch", "tensorboard", "detr", "object-detection", "generated_from_trainer", "dataset:crack_detection-merged-ii", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2023-08-22T11:09:26Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer datasets: - crack_detection-merged-ii model-index: - name: detr-crack-II results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-crack-II This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the crack_detection-merged-ii dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
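The card above covers only training. A hedged object-detection sketch with the transformers pipeline follows; the image path and score threshold are illustrative, and the crack label names are not documented in the card.

```python
from transformers import pipeline

# Object detection with the fine-tuned DETR checkpoint
detector = pipeline("object-detection", model="akar49/detr-crack-II")

# "wall.jpg" is a placeholder image path; threshold=0.5 is an illustrative choice
for det in detector("wall.jpg", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```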
AhmedTaha012/finance-ner-v0.0.4-finetuned-ner
AhmedTaha012
2023-08-22T12:49:06Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-22T12:16:22Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: finance-ner-v0.0.4-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finance-ner-v0.0.4-finetuned-ner This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Precision: 0.9988 - Recall: 1.0 - F1: 0.9994 - Accuracy: 1.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0 | 1.0 | 1413 | 0.0000 | 0.9994 | 0.9998 | 0.9996 | 1.0000 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
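A small token-classification sketch may clarify how this NER checkpoint is queried; the aggregation strategy and example sentence are assumptions, and the entity label set is not documented in the card.

```python
from transformers import pipeline

# Token classification (NER) with the fine-tuned finance model, grouping sub-tokens into entity spans
ner = pipeline(
    "token-classification",
    model="AhmedTaha012/finance-ner-v0.0.4-finetuned-ner",
    aggregation_strategy="simple",
)

# The sentence below is illustrative only
for entity in ner("Acme Corp reported revenue of $4.2 billion in Q2 2023."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```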
natsusakiyomi/SakuraMix
natsusakiyomi
2023-08-22T12:30:44Z
115
70
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "ja", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-03-17T17:37:21Z
--- license: creativeml-openrail-m language: - ja pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image library_name: diffusers --- <div class="flex justify-center"> <div class="container p-0 w-100"> <img class="mt-0 object-cover rounded-t-lg w-100" style="height: 320px;" src="https://pbs.twimg.com/media/Fwzt7HZaEAAkX2U?format=jpg" width="100%"/> <div class="flex px-4"> <div class="flex-auto"> <h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(252, 238, 235/var(--tw-text-opacity));"> SakuraMixSeries </h1> <p class="mb-4 text-base text-neutral-600 dark:text-neutral-200"> 背景とキャラクタークオリティーを両立させたVAE内蔵型モデル<br> Model with built-in VAE for both background and character quality </p> </div> <div> <a href="https://twitter.com/min__san" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a> </div> </div> </div> </div> --- <h4>📄 ライセンス / License</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" 
fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> <h3 id="blue_pencil-v7" class="mt-0 text-2xl"> <code>SakuraMix-v4</code> <small></small> </h3> <div> v3の改修モデル 全体的に手や破綻の少なくなったモデル<br> 若干書き込み量が減ったような気がするので昔からSakuraMix好きな人はflat loraを使うことを推奨 <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="blue_pencil-v7" class="mt-0 text-2xl"> <code>SakuraMix-v3</code> <small></small> </h3> <div> v2の改修モデル 服装や構図が前よりも増えた気がする 破綻しやすいがいいものが生成できるときはとてもいいものが生成できる<br> 個人的にはv2をお勧めします <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="SakuraMix-v2" class="mt-0 text-2xl"> <code>SakuraMix-v2</code> <small></small> </h3> <div> HimawariMix-v2B(没案)を改造したモデル<br> HimawariMix-v2自体character自体を強化したモデルだがさらにキャラを強くしたモデル <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="SakuraMix-v1" class="mt-0 text-2xl"> <code>SakuraMix-v1</code> <small></small> </h3> <div> 初代SakuraMix 特徴とか知らん忘れた<br> --- # 作者&連絡先 Twiter: [@min__san](https://twitter.com/min__san)<br> mail: (natsusakiyomi@mail.ru)
asenella/ms_config_1_alpha_10_beta_250_seed_1
asenella
2023-08-22T12:29:55Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-08-22T12:29:53Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/ms_config_1_alpha_10_beta_250_seed_1") ```
kevinassemi/WaterWizard
kevinassemi
2023-08-22T12:19:21Z
0
0
null
[ "text-generation", "en", "license:llama2", "region:us" ]
text-generation
2023-08-22T12:17:16Z
--- license: llama2 language: - en pipeline_tag: text-generation ---
Muhammadreza/mann-e-bitmap-revised-2
Muhammadreza
2023-08-22T12:14:16Z
4
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-22T12:01:29Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### mann-e_bitmap_revised-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
zjoe/dqn-SpaceInvadersNoFrameskip-v4
zjoe
2023-08-22T12:09:09Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T12:08:35Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 404.00 +/- 164.56 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zjoe -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zjoe -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zjoe ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0002), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
EliKet/lora-trained-xl-colab
EliKet
2023-08-22T11:53:44Z
2
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-0.9", "base_model:adapter:stabilityai/stable-diffusion-xl-base-0.9", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T09:17:36Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-0.9 instance_prompt: a photo of model tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - EliKet/lora-trained-xl-colab These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-0.9. The weights were trained on a photo of model using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
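The card does not show how to apply these LoRA weights. Below is a minimal diffusers sketch, assuming the usual LoRA layout produced by the DreamBooth LoRA training script; the prompt mirrors the card's instance prompt, and the fp16-fix VAE follows the card's note about the training VAE.

```python
# Hedged sketch: assumes the repo follows the standard diffusers LoRA layout for SDXL DreamBooth runs.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card notes madebyollin/sdxl-vae-fp16-fix was used as the training VAE
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("EliKet/lora-trained-xl-colab")

# "a photo of model" is the instance prompt declared in the card metadata
image = pipe("a photo of model", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```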
dkqjrm/20230822185017
dkqjrm
2023-08-22T11:48:59Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T09:50:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230822185017' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230822185017 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3476 - Accuracy: 0.7076 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.3644 | 0.5271 | | 0.5253 | 2.0 | 624 | 0.3757 | 0.5632 | | 0.5253 | 3.0 | 936 | 0.3595 | 0.4874 | | 0.4289 | 4.0 | 1248 | 0.4613 | 0.5415 | | 0.4182 | 5.0 | 1560 | 0.3427 | 0.6137 | | 0.4182 | 6.0 | 1872 | 0.3880 | 0.4874 | | 0.4027 | 7.0 | 2184 | 0.4778 | 0.5487 | | 0.4027 | 8.0 | 2496 | 0.3335 | 0.6715 | | 0.4009 | 9.0 | 2808 | 0.4011 | 0.5523 | | 0.3781 | 10.0 | 3120 | 0.3286 | 0.7040 | | 0.3781 | 11.0 | 3432 | 0.4135 | 0.6101 | | 0.3679 | 12.0 | 3744 | 0.3368 | 0.6787 | | 0.3774 | 13.0 | 4056 | 0.3311 | 0.6787 | | 0.3774 | 14.0 | 4368 | 0.3223 | 0.6859 | | 0.3457 | 15.0 | 4680 | 0.3293 | 0.7076 | | 0.3457 | 16.0 | 4992 | 0.4108 | 0.5812 | | 0.3607 | 17.0 | 5304 | 0.3682 | 0.6534 | | 0.3436 | 18.0 | 5616 | 0.3374 | 0.6498 | | 0.3436 | 19.0 | 5928 | 0.3248 | 0.7148 | | 0.3236 | 20.0 | 6240 | 0.3447 | 0.7184 | | 0.3022 | 21.0 | 6552 | 0.3444 | 0.7148 | | 0.3022 | 22.0 | 6864 | 0.3790 | 0.6643 | | 0.2938 | 23.0 | 7176 | 0.3575 | 0.6968 | | 0.2938 | 24.0 | 7488 | 0.3321 | 0.7112 | | 0.2837 | 25.0 | 7800 | 0.3570 | 0.7076 | | 0.2783 | 26.0 | 8112 | 0.3716 | 0.6426 | | 0.2783 | 27.0 | 8424 | 0.3534 | 0.7040 | | 0.2693 | 28.0 | 8736 | 0.3435 | 0.7004 | | 0.2654 | 29.0 | 9048 | 0.3371 | 0.6968 | | 0.2654 | 30.0 | 9360 | 0.3610 | 0.6787 | | 0.2598 | 31.0 | 9672 | 0.3277 | 0.7220 | | 0.2598 | 32.0 | 9984 | 0.3412 | 0.7076 | | 0.257 | 33.0 | 10296 | 0.3389 | 0.7040 | | 0.2484 | 34.0 | 10608 | 0.3424 | 0.6968 | | 0.2484 | 35.0 | 10920 | 0.3671 | 0.7112 | | 0.2446 | 36.0 | 11232 | 0.3492 | 0.7148 | | 0.2449 | 37.0 | 11544 | 0.3485 | 0.7148 | | 0.2449 | 38.0 | 11856 | 0.3413 | 0.7148 | | 0.2414 | 39.0 | 12168 | 0.3373 | 0.7004 | | 0.2414 | 40.0 | 12480 | 0.3415 | 0.7220 | | 0.2377 | 41.0 | 12792 | 0.3434 | 0.6931 | | 0.2353 | 42.0 | 13104 | 0.3612 | 0.7040 | | 0.2353 | 43.0 | 13416 | 0.3516 | 0.7112 | | 0.2347 | 44.0 | 13728 | 0.3430 | 0.7112 | | 0.2357 | 45.0 | 14040 | 0.3455 | 0.7004 | | 0.2357 | 46.0 | 14352 | 0.3480 | 0.7040 | | 0.2306 | 47.0 | 14664 | 0.3580 | 0.7112 | | 0.2306 | 48.0 | 14976 | 0.3636 | 0.7040 | | 0.2304 | 49.0 | 15288 | 0.3483 | 0.7112 | | 0.2295 | 50.0 | 15600 | 0.3529 | 0.7004 | | 0.2295 | 51.0 | 15912 | 0.3498 | 0.7040 | | 0.2296 | 52.0 | 16224 | 0.3501 | 0.7220 | | 0.2285 | 53.0 | 16536 | 0.3474 | 0.7076 | | 0.2285 | 54.0 | 16848 | 0.3444 | 0.7076 | | 0.2276 | 55.0 | 17160 | 0.3404 | 0.7004 | | 0.2276 | 
56.0 | 17472 | 0.3500 | 0.6895 | | 0.2278 | 57.0 | 17784 | 0.3507 | 0.7040 | | 0.2264 | 58.0 | 18096 | 0.3468 | 0.7040 | | 0.2264 | 59.0 | 18408 | 0.3522 | 0.7040 | | 0.2265 | 60.0 | 18720 | 0.3476 | 0.7076 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
dkqjrm/20230822185044
dkqjrm
2023-08-22T11:47:15Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T09:51:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230822185044' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230822185044 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3482 - Accuracy: 0.4729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.3580 | 0.5379 | | 0.5102 | 2.0 | 624 | 0.3670 | 0.5415 | | 0.5102 | 3.0 | 936 | 0.4888 | 0.4765 | | 0.4569 | 4.0 | 1248 | 0.3742 | 0.4982 | | 0.4403 | 5.0 | 1560 | 0.3796 | 0.5379 | | 0.4403 | 6.0 | 1872 | 0.3602 | 0.5776 | | 0.4215 | 7.0 | 2184 | 0.4013 | 0.5415 | | 0.4215 | 8.0 | 2496 | 0.3596 | 0.5884 | | 0.4166 | 9.0 | 2808 | 0.3447 | 0.5487 | | 0.3885 | 10.0 | 3120 | 0.3395 | 0.6101 | | 0.3885 | 11.0 | 3432 | 0.3395 | 0.6354 | | 0.3776 | 12.0 | 3744 | 0.3568 | 0.5343 | | 0.4274 | 13.0 | 4056 | 0.5923 | 0.4729 | | 0.4274 | 14.0 | 4368 | 0.3503 | 0.5668 | | 0.4138 | 15.0 | 4680 | 0.3605 | 0.5523 | | 0.4138 | 16.0 | 4992 | 0.3491 | 0.5451 | | 0.4025 | 17.0 | 5304 | 0.3728 | 0.5379 | | 0.394 | 18.0 | 5616 | 0.4029 | 0.4729 | | 0.394 | 19.0 | 5928 | 0.3682 | 0.4729 | | 0.3892 | 20.0 | 6240 | 0.3484 | 0.5054 | | 0.3839 | 21.0 | 6552 | 0.3485 | 0.4765 | | 0.3839 | 22.0 | 6864 | 0.3467 | 0.5343 | | 0.3782 | 23.0 | 7176 | 0.3471 | 0.5307 | | 0.3782 | 24.0 | 7488 | 0.3565 | 0.4693 | | 0.3757 | 25.0 | 7800 | 0.3483 | 0.5343 | | 0.3737 | 26.0 | 8112 | 0.3495 | 0.5271 | | 0.3737 | 27.0 | 8424 | 0.3550 | 0.4729 | | 0.3724 | 28.0 | 8736 | 0.3544 | 0.4729 | | 0.3696 | 29.0 | 9048 | 0.3478 | 0.5307 | | 0.3696 | 30.0 | 9360 | 0.3519 | 0.5271 | | 0.3693 | 31.0 | 9672 | 0.3515 | 0.5271 | | 0.3693 | 32.0 | 9984 | 0.3487 | 0.4729 | | 0.3674 | 33.0 | 10296 | 0.3492 | 0.5379 | | 0.3628 | 34.0 | 10608 | 0.3555 | 0.4729 | | 0.3628 | 35.0 | 10920 | 0.3550 | 0.4729 | | 0.3635 | 36.0 | 11232 | 0.3686 | 0.4729 | | 0.3636 | 37.0 | 11544 | 0.3488 | 0.4801 | | 0.3636 | 38.0 | 11856 | 0.3484 | 0.4874 | | 0.3595 | 39.0 | 12168 | 0.3477 | 0.4910 | | 0.3595 | 40.0 | 12480 | 0.3486 | 0.5307 | | 0.3598 | 41.0 | 12792 | 0.3488 | 0.4801 | | 0.3594 | 42.0 | 13104 | 0.3614 | 0.4729 | | 0.3594 | 43.0 | 13416 | 0.3476 | 0.5199 | | 0.3586 | 44.0 | 13728 | 0.3482 | 0.4729 | | 0.3581 | 45.0 | 14040 | 0.3519 | 0.4729 | | 0.3581 | 46.0 | 14352 | 0.3494 | 0.4729 | | 0.3579 | 47.0 | 14664 | 0.3613 | 0.4729 | | 0.3579 | 48.0 | 14976 | 0.3480 | 0.4729 | | 0.3573 | 49.0 | 15288 | 0.3480 | 0.4729 | | 0.3564 | 50.0 | 15600 | 0.3487 | 0.4729 | | 0.3564 | 51.0 | 15912 | 0.3529 | 0.4729 | | 0.3561 | 52.0 | 16224 | 0.3515 | 0.4729 | | 0.3554 | 53.0 | 16536 | 0.3475 | 0.4946 | | 0.3554 | 54.0 | 16848 | 0.3489 | 0.5271 | | 0.3535 | 55.0 | 17160 | 0.3488 | 0.4729 | | 0.3535 | 56.0 
| 17472 | 0.3478 | 0.5018 | | 0.3542 | 57.0 | 17784 | 0.3491 | 0.4729 | | 0.354 | 58.0 | 18096 | 0.3485 | 0.4729 | | 0.354 | 59.0 | 18408 | 0.3483 | 0.4729 | | 0.3529 | 60.0 | 18720 | 0.3482 | 0.4729 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
dkqjrm/20230822185237
dkqjrm
2023-08-22T11:44:15Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T09:52:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230822185237' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230822185237 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3335 - Accuracy: 0.6498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.3589 | 0.5415 | | 0.4381 | 2.0 | 624 | 0.3585 | 0.5560 | | 0.4381 | 3.0 | 936 | 0.4824 | 0.4729 | | 0.4251 | 4.0 | 1248 | 0.3497 | 0.5740 | | 0.4013 | 5.0 | 1560 | 0.5515 | 0.5307 | | 0.4013 | 6.0 | 1872 | 0.5300 | 0.5343 | | 0.4064 | 7.0 | 2184 | 0.3515 | 0.4982 | | 0.4064 | 8.0 | 2496 | 0.3456 | 0.5704 | | 0.4121 | 9.0 | 2808 | 0.3522 | 0.5632 | | 0.4048 | 10.0 | 3120 | 0.3437 | 0.5632 | | 0.4048 | 11.0 | 3432 | 0.3483 | 0.5668 | | 0.4035 | 12.0 | 3744 | 0.3952 | 0.4657 | | 0.3797 | 13.0 | 4056 | 0.3535 | 0.4801 | | 0.3797 | 14.0 | 4368 | 0.3443 | 0.5993 | | 0.3657 | 15.0 | 4680 | 0.3431 | 0.5379 | | 0.3657 | 16.0 | 4992 | 0.3478 | 0.5993 | | 0.3615 | 17.0 | 5304 | 0.3475 | 0.6173 | | 0.3573 | 18.0 | 5616 | 0.3539 | 0.6101 | | 0.3573 | 19.0 | 5928 | 0.3384 | 0.6101 | | 0.3552 | 20.0 | 6240 | 0.3483 | 0.6245 | | 0.3545 | 21.0 | 6552 | 0.3359 | 0.6173 | | 0.3545 | 22.0 | 6864 | 0.3844 | 0.5740 | | 0.349 | 23.0 | 7176 | 0.3436 | 0.6498 | | 0.349 | 24.0 | 7488 | 0.3422 | 0.6209 | | 0.351 | 25.0 | 7800 | 0.3495 | 0.6318 | | 0.3471 | 26.0 | 8112 | 0.3498 | 0.6101 | | 0.3471 | 27.0 | 8424 | 0.3316 | 0.6462 | | 0.3468 | 28.0 | 8736 | 0.3322 | 0.6751 | | 0.3459 | 29.0 | 9048 | 0.3354 | 0.6390 | | 0.3459 | 30.0 | 9360 | 0.3353 | 0.6390 | | 0.344 | 31.0 | 9672 | 0.3383 | 0.6354 | | 0.344 | 32.0 | 9984 | 0.3329 | 0.6245 | | 0.3435 | 33.0 | 10296 | 0.3411 | 0.6390 | | 0.3408 | 34.0 | 10608 | 0.3414 | 0.6354 | | 0.3408 | 35.0 | 10920 | 0.3319 | 0.6534 | | 0.3401 | 36.0 | 11232 | 0.3347 | 0.6282 | | 0.3406 | 37.0 | 11544 | 0.3382 | 0.6137 | | 0.3406 | 38.0 | 11856 | 0.3355 | 0.6245 | | 0.3378 | 39.0 | 12168 | 0.3416 | 0.6245 | | 0.3378 | 40.0 | 12480 | 0.3422 | 0.6209 | | 0.3386 | 41.0 | 12792 | 0.3388 | 0.6390 | | 0.3362 | 42.0 | 13104 | 0.3330 | 0.6390 | | 0.3362 | 43.0 | 13416 | 0.3393 | 0.6282 | | 0.3373 | 44.0 | 13728 | 0.3340 | 0.6282 | | 0.3337 | 45.0 | 14040 | 0.3318 | 0.6390 | | 0.3337 | 46.0 | 14352 | 0.3323 | 0.6354 | | 0.3332 | 47.0 | 14664 | 0.3301 | 0.6643 | | 0.3332 | 48.0 | 14976 | 0.3422 | 0.6282 | | 0.3315 | 49.0 | 15288 | 0.3348 | 0.6570 | | 0.33 | 50.0 | 15600 | 0.3366 | 0.6462 | | 0.33 | 51.0 | 15912 | 0.3308 | 0.6570 | | 0.331 | 52.0 | 16224 | 0.3298 | 0.6606 | | 0.3295 | 53.0 | 16536 | 0.3377 | 0.6498 | | 0.3295 | 54.0 | 16848 | 0.3439 | 0.6462 | | 0.3282 | 55.0 | 17160 | 0.3326 | 0.6570 | | 0.3282 | 56.0 | 17472 
| 0.3356 | 0.6498 | | 0.3291 | 57.0 | 17784 | 0.3309 | 0.6570 | | 0.3278 | 58.0 | 18096 | 0.3333 | 0.6498 | | 0.3278 | 59.0 | 18408 | 0.3324 | 0.6498 | | 0.3292 | 60.0 | 18720 | 0.3335 | 0.6498 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
stalker331333/my-pet-cat
stalker331333
2023-08-22T11:19:37Z
41
0
diffusers
[ "diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-22T11:16:01Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Cat Dreambooth model trained by stalker331333 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/stalker331333/my-pet-cat/resolve/main/sample_images/a4056cc6-bf04-450e-9ed4-eb57a73ae4cb.jpeg)
elit333/newstable
elit333
2023-08-22T11:15:25Z
5
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2207.12598", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:1910.09700", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-22T10:20:12Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: true extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # Stable Diffusion v1-5 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion). ### Diffusers ```py from diffusers import StableDiffusionPipeline import torch model_id = "runwayml/stable-diffusion-v1-5" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion) ### Original GitHub Repository 1. Download the weights - [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference - [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning 2. Follow instructions [here](https://github.com/runwayml/stable-diffusion). 
## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. 
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. Currently six Stable Diffusion checkpoints are provided, which were trained as follows. 
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
- **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
Muhammadreza/mann-e-pixel-art-revised-2
Muhammadreza
2023-08-22T11:12:49Z
5
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-22T11:08:57Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### mann-e_pixel-art_revised-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
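Below is a minimal `diffusers` sketch for trying the concept outside of the A1111 Colab. It assumes the repository loads as a standard `StableDiffusionPipeline` (as its tags suggest); the prompt is purely illustrative, and the instance token should match whatever was used during DreamBooth training.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint as a regular Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "Muhammadreza/mann-e-pixel-art-revised-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Illustrative prompt only: use the trained instance token for best results.
prompt = "pixel art portrait of a knight, detailed, 16-bit style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sample.png")
```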
ahmedtremo/image-gen
ahmedtremo
2023-08-22T11:10:58Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-08-22T09:55:30Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of a GenNexttt tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
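A minimal sketch of one way to try the model with `diffusers`, assuming the AutoTrain DreamBooth run produced LoRA weights on top of the SDXL base named above (this is the usual AutoTrain output, but it is an assumption here); the instance prompt comes from the card metadata.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model named in the card.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Assumption: the repo holds DreamBooth LoRA weights compatible with load_lora_weights.
pipe.load_lora_weights("ahmedtremo/image-gen")

# Instance prompt taken from the model card metadata.
image = pipe("photo of a GenNexttt", num_inference_steps=30).images[0]
image.save("gennexttt.png")
```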
deepsdh99/llama2-qlora-finetunined-8
deepsdh99
2023-08-22T11:05:58Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T11:05:30Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
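The values listed above map one-to-one onto a `BitsAndBytesConfig` when reloading a base model for this adapter. A minimal sketch, assuming a causal-LM base; the base model id is a placeholder because the card does not name it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_id = "<base-model-id>"  # placeholder: the card does not state the base model

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
# Attach the QLoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "deepsdh99/llama2-qlora-finetunined-8")
```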
cast42/my-awesome-setfit-model
cast42
2023-08-22T11:05:20Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-22T11:05:02Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # cast42/my-awesome-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("cast42/my-awesome-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
OpenBuddy/openbuddy-openllama-3b-v10-bf16
OpenBuddy
2023-08-22T10:51:04Z
1,568
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-10T13:37:46Z
--- language: - zh - en - fr - de - ja - ko - it - ru pipeline_tag: text-generation inference: false library_name: transformers license: apache-2.0 --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice License: Apache 2.0. ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
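The card itself does not include a code snippet; a minimal `transformers` sketch is shown below under the assumption that the checkpoint loads as a standard Llama-architecture causal LM. The prompt is illustrative only — follow the OpenBuddy usage guide linked above for the recommended chat format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenBuddy/openbuddy-openllama-3b-v10-bf16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; the proper chat template is documented in the OpenBuddy repo.
prompt = "You are a helpful assistant.\nUser: Hello, who are you?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```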
roa7n/gpt2-human_nontata_promoters-rng_ep8
roa7n
2023-08-22T10:51:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T10:51:00Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
OpenBuddy/openbuddy-falcon-40b-v9-bf16
OpenBuddy
2023-08-22T10:50:44Z
18
4
transformers
[ "transformers", "pytorch", "RefinedWeb", "text-generation", "custom_code", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-02T22:19:59Z
--- language: - zh - en - fr - de - ja - ko - it - ru pipeline_tag: text-generation inference: false library_name: transformers license: apache-2.0 --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice License: Apache 2.0. ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
ankur24022002/test1
ankur24022002
2023-08-22T10:44:57Z
0
0
peft
[ "peft", "pytorch", "opt", "region:us" ]
null
2023-08-22T08:27:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
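For reference, a sketch of how the adapter could be reloaded for inference; `PeftConfig` records the base model that was used, so it does not have to be hard-coded. This snippet is illustrative and not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

adapter_id = "ankur24022002/test1"

# The adapter config stores the base model it was trained on.
peft_config = PeftConfig.from_pretrained(adapter_id)

tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```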
dc-at-hf/Reinforce-CartPole-v1
dc-at-hf
2023-08-22T10:22:29Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T10:22:20Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
thanhnew2001/bloom560m_grade7_2000_10kstep
thanhnew2001
2023-08-22T10:22:01Z
31
0
peft
[ "peft", "text-generation", "region:us" ]
text-generation
2023-08-22T09:50:28Z
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions


- PEFT 0.5.0.dev0
rohanbalkondekar/yes-bank
rohanbalkondekar
2023-08-22T10:21:14Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-22T10:14:32Z
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2ogpt-4096-llama2-7b](https://huggingface.co/h2oai/h2ogpt-4096-llama2-7b)


## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.

```bash
pip install transformers==4.31.0
```

Also make sure you are providing your Hugging Face token to the pipeline if the model is hosted in a private repo.
    - Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running

        ```python
        import huggingface_hub
        huggingface_hub.login(<ACCESS_TOKEN>)
        ```
    - Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`

```python
from transformers import pipeline

generate_text = pipeline(
    model="BeRohan/yes-bank",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
    token=True,
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.

```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "BeRohan/yes-bank",
    use_fast=True,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "BeRohan/yes-bank",
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BeRohan/yes-bank"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
RishuD7/t5_number_v7_new_data
RishuD7
2023-08-22T10:11:45Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-22T07:29:28Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5_number_v7_new_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_number_v7_new_data This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0108 - Cer: 0.6315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0163 | 1.0 | 858 | 0.0141 | 0.8245 | | 0.0135 | 2.0 | 1716 | 0.0121 | 0.7102 | | 0.012 | 3.0 | 2574 | 0.0115 | 0.6647 | | 0.0115 | 4.0 | 3432 | 0.0111 | 0.6577 | | 0.0115 | 5.0 | 4290 | 0.0110 | 0.6490 | | 0.0106 | 6.0 | 5148 | 0.0108 | 0.6461 | | 0.0103 | 7.0 | 6006 | 0.0108 | 0.6362 | | 0.0103 | 8.0 | 6864 | 0.0108 | 0.6362 | | 0.0101 | 9.0 | 7722 | 0.0108 | 0.6327 | | 0.01 | 10.0 | 8580 | 0.0108 | 0.6315 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
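For reference, a sketch of how the hyperparameters listed above translate into a standard `Seq2SeqTrainingArguments` setup. This is illustrative only; the actual training script is not part of the card, and any unlisted arguments keep their defaults here.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the listed hyperparameters onto training arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5_number_v7_new_data",
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```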
GrantW65/ppo-LunarLander-v2
GrantW65
2023-08-22T10:07:55Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T10:07:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 245.25 +/- 48.18 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
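Since the usage section above is still a TODO, here is a minimal sketch of the typical Stable-Baselines3 loading pattern. The checkpoint filename is an assumption (SB3 Hub uploads usually store the policy as a single `.zip`), so adjust it to the actual file in the repository.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repository for the actual .zip name.
checkpoint = load_from_hub(
    repo_id="GrantW65/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# LunarLander-v2 requires the box2d extra of gymnasium.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```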
asenella/ms_config_1_alpha_10_beta_1_seed_2
asenella
2023-08-22T10:02:53Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-08-22T10:02:51Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
qgallouedec/window-open-v2
qgallouedec
2023-08-22T09:49:49Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:03:16Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: window-open-v2 type: window-open-v2 metrics: - type: mean_reward value: 613.63 +/- 41.62 name: mean_reward verified: false --- A(n) **APPO** model trained on the **window-open-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/window-open-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=window-open-v2 --train_dir=./train_dir --experiment=window-open-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=window-open-v2 --train_dir=./train_dir --experiment=window-open-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/sweep-into-v2
qgallouedec
2023-08-22T09:46:59Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:02:39Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: sweep-into-v2 type: sweep-into-v2 metrics: - type: mean_reward value: 802.49 +/- 11.59 name: mean_reward verified: false --- A(n) **APPO** model trained on the **sweep-into-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/sweep-into-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=sweep-into-v2 --train_dir=./train_dir --experiment=sweep-into-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=sweep-into-v2 --train_dir=./train_dir --experiment=sweep-into-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/stick-pull-v2
qgallouedec
2023-08-22T09:45:10Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:02:18Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: stick-pull-v2 type: stick-pull-v2 metrics: - type: mean_reward value: 540.61 +/- 8.27 name: mean_reward verified: false --- A(n) **APPO** model trained on the **stick-pull-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/stick-pull-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=stick-pull-v2 --train_dir=./train_dir --experiment=stick-pull-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=stick-pull-v2 --train_dir=./train_dir --experiment=stick-pull-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/soccer-v2
qgallouedec
2023-08-22T09:44:14Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:02:07Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: soccer-v2 type: soccer-v2 metrics: - type: mean_reward value: 411.46 +/- 168.68 name: mean_reward verified: false --- A(n) **APPO** model trained on the **soccer-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/soccer-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=soccer-v2 --train_dir=./train_dir --experiment=soccer-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=soccer-v2 --train_dir=./train_dir --experiment=soccer-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Samuael/llama-2-7b-tebot-sharded
Samuael
2023-08-22T09:44:04Z
0
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-08-17T16:00:53Z
---
library_name: peft
---
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions


- PEFT 0.4.0
qgallouedec/shelf-place-v2
qgallouedec
2023-08-22T09:43:18Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:01:57Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: shelf-place-v2 type: shelf-place-v2 metrics: - type: mean_reward value: 274.68 +/- 29.20 name: mean_reward verified: false --- A(n) **APPO** model trained on the **shelf-place-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/shelf-place-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=shelf-place-v2 --train_dir=./train_dir --experiment=shelf-place-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=shelf-place-v2 --train_dir=./train_dir --experiment=shelf-place-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/reach-v2
qgallouedec
2023-08-22T09:41:29Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:01:35Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: reach-v2 type: reach-v2 metrics: - type: mean_reward value: 686.43 +/- 166.95 name: mean_reward verified: false --- A(n) **APPO** model trained on the **reach-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/reach-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=reach-v2 --train_dir=./train_dir --experiment=reach-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=reach-v2 --train_dir=./train_dir --experiment=reach-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
seaside2003/opt-6.7b-lora
seaside2003
2023-08-22T09:39:41Z
5
0
peft
[ "peft", "region:us" ]
null
2023-08-22T07:10:43Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
stabilityai/japanese-stablelm-base-alpha-7b
stabilityai
2023-08-22T09:36:29Z
1,656
120
transformers
[ "transformers", "pytorch", "text-generation", "japanese-stablelm", "causal-lm", "custom_code", "ja", "dataset:wikipedia", "dataset:mc4", "dataset:cc100", "dataset:oscar-corpus/OSCAR-2301", "dataset:oscar-corpus/OSCAR-2201", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-08-09T14:30:09Z
--- language: - ja tags: - japanese-stablelm - causal-lm pipeline_tag: text-generation datasets: - wikipedia - mc4 - cc100 - oscar-corpus/OSCAR-2301 - oscar-corpus/OSCAR-2201 - togethercomputer/RedPajama-Data-1T license: - apache-2.0 --- # Japanese-StableLM-Base-Alpha-7B ![japanese-stablelm-icon](./japanese-stablelm-parrot.jpg) > "A parrot able to speak Japanese, ukiyoe, edo period" — [Stable Diffusion XL](https://clipdrop.co/stable-diffusion) ## Model Description `japanese-stablelm-base-alpha-7b` is a 7B-parameter decoder-only language model pre-trained on a diverse collection of Japanese and English datasets which focus on maximizing Japanese language modeling performance and Japanese downstream task performance. For an instruction-following model, check [Japanese-StableLM-Instruct-Alpha-7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-alpha-7b) and get access by accepting the terms and conditions. ## Usage First install additional dependencies in [requirements.txt](./requirements.txt): ```sh pip install sentencepiece einops ``` Then start generating text with `japanese-stablelm-base-alpha-7b` by using the following code snippet: ```python import torch from transformers import LlamaTokenizer, AutoModelForCausalLM tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁']) model = AutoModelForCausalLM.from_pretrained( "stabilityai/japanese-stablelm-base-alpha-7b", trust_remote_code=True, ) model.half() model.eval() if torch.cuda.is_available(): model = model.to("cuda") prompt = """ AI で科学研究を加速するには、 """.strip() input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) # this is for reproducibility. # feel free to change to get different result seed = 23 torch.manual_seed(seed) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=128, temperature=1, top_p=0.95, do_sample=True, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) """ AI で科学研究を加速するには、データ駆動型文化が必要であることも明らかになってきています。研究のあらゆる側面で、データがより重要になっているのです。 20 世紀の科学は、研究者が直接研究を行うことで、研究データを活用してきました。その後、多くの科学分野ではデータは手動で分析されるようになったものの、これらの方法には多大なコストと労力がかかることが分かりました。 そこで、多くの研究者や研究者グループは、より効率的な手法を開発し、研究の規模を拡大してきました。21 世紀になると、研究者が手動で実施する必要のある研究は、その大部分を研究者が自動化できるようになりました。 """ ``` We suggest playing with different generation config (`top_p`, `repetition_penalty` etc) to find the best setup for your tasks. For example, use higher temperature for roleplay task, lower temperature for reasoning. ## Model Details * **Model type**: `japanese-stablelm-base-alpha-7b` model is an auto-regressive language model based on the NeoX transformer architecture. * **Language(s)**: Japanese * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) * **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). 
## Training

| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 7B         | 4096        | 32     | 32    | 2048            |

### Training Dataset

`japanese-stablelm-base-alpha-7b` is pre-trained on around 750B tokens from a mixture of the following corpora:

- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)

## Use and Limitations

### Intended Use

The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.

### Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

## Authors
- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)

## Acknowledgements

We are utilizing the v1 version of the [novelai-tokenizer](https://github.com/NovelAI/novelai-tokenizer), introduced by [NovelAI](https://novelai.net/), because it processes both Japanese and English text effectively and efficiently. We extend our gratitude to NovelAI for allowing us to use their remarkable work. For more details about the tokenizer, please refer to their [blog post](https://blog.novelai.net/novelais-new-llm-tokenizer-5bc140e17642).

We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
## How to cite

```bibtex
@misc{JapaneseStableLMBaseAlpha7B,
      url={https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b},
      title={Japanese StableLM Base Alpha 7B},
      author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki}
}
```

## Citations

```bibtex
@software{gpt-neox-library,
  title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
  author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
  url = {https://www.github.com/eleutherai/gpt-neox},
  doi = {10.5281/zenodo.5879544},
  month = {8},
  year = {2021},
  version = {0.0.1},
}
```
qgallouedec/plate-slide-back-v2
qgallouedec
2023-08-22T09:35:57Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:00:34Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: plate-slide-back-v2 type: plate-slide-back-v2 metrics: - type: mean_reward value: 709.93 +/- 82.31 name: mean_reward verified: false --- A(n) **APPO** model trained on the **plate-slide-back-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/plate-slide-back-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=plate-slide-back-v2 --train_dir=./train_dir --experiment=plate-slide-back-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=plate-slide-back-v2 --train_dir=./train_dir --experiment=plate-slide-back-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/pick-place-v2
qgallouedec
2023-08-22T09:33:11Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:00:04Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: pick-place-v2 type: pick-place-v2 metrics: - type: mean_reward value: 447.63 +/- 150.72 name: mean_reward verified: false --- A(n) **APPO** model trained on the **pick-place-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/pick-place-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=pick-place-v2 --train_dir=./train_dir --experiment=pick-place-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=pick-place-v2 --train_dir=./train_dir --experiment=pick-place-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/peg-insert-side-v2
qgallouedec
2023-08-22T09:30:22Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:59:35Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: peg-insert-side-v2 type: peg-insert-side-v2 metrics: - type: mean_reward value: 308.94 +/- 175.97 name: mean_reward verified: false --- A(n) **APPO** model trained on the **peg-insert-side-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/peg-insert-side-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=peg-insert-side-v2 --train_dir=./train_dir --experiment=peg-insert-side-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=peg-insert-side-v2 --train_dir=./train_dir --experiment=peg-insert-side-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
pansysy/distilbert-base-uncased_emotion_ft_0416_emotion_ft_3306
pansysy
2023-08-22T09:29:51Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T09:24:30Z
--- tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased_emotion_ft_0416_emotion_ft_3306 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0416_emotion_ft_3306 This model was trained from scratch on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
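The card does not show a usage example; a minimal sketch with the `transformers` pipeline is given below, assuming the model follows the standard sequence-classification layout for the emotion dataset. The input sentence is purely illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pansysy/distilbert-base-uncased_emotion_ft_0416_emotion_ft_3306",
)

# Illustrative input; the model was fine-tuned on the `emotion` dataset.
print(classifier("I can't believe how wonderful today has been!"))
```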
qgallouedec/handle-pull-v2
qgallouedec
2023-08-22T09:28:29Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:59:15Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: handle-pull-v2 type: handle-pull-v2 metrics: - type: mean_reward value: 698.01 +/- 21.12 name: mean_reward verified: false --- A(n) **APPO** model trained on the **handle-pull-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/handle-pull-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=handle-pull-v2 --train_dir=./train_dir --experiment=handle-pull-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=handle-pull-v2 --train_dir=./train_dir --experiment=handle-pull-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/handle-pull-side-v2
qgallouedec
2023-08-22T09:27:34Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:59:05Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: handle-pull-side-v2 type: handle-pull-side-v2 metrics: - type: mean_reward value: 462.12 +/- 95.86 name: mean_reward verified: false --- A(n) **APPO** model trained on the **handle-pull-side-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/handle-pull-side-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=handle-pull-side-v2 --train_dir=./train_dir --experiment=handle-pull-side-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=handle-pull-side-v2 --train_dir=./train_dir --experiment=handle-pull-side-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/hand-insert-v2
qgallouedec
2023-08-22T09:24:46Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:58:36Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: hand-insert-v2 type: hand-insert-v2 metrics: - type: mean_reward value: 742.89 +/- 26.31 name: mean_reward verified: false --- A(n) **APPO** model trained on the **hand-insert-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/hand-insert-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=hand-insert-v2 --train_dir=./train_dir --experiment=hand-insert-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=hand-insert-v2 --train_dir=./train_dir --experiment=hand-insert-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/hammer-v2
qgallouedec
2023-08-22T09:23:53Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:58:25Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: hammer-v2 type: hammer-v2 metrics: - type: mean_reward value: 692.49 +/- 21.25 name: mean_reward verified: false --- A(n) **APPO** model trained on the **hammer-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/hammer-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=hammer-v2 --train_dir=./train_dir --experiment=hammer-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=hammer-v2 --train_dir=./train_dir --experiment=hammer-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/faucet-open-v2
qgallouedec
2023-08-22T09:22:52Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:58:13Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: faucet-open-v2 type: faucet-open-v2 metrics: - type: mean_reward value: 738.95 +/- 10.76 name: mean_reward verified: false --- A(n) **APPO** model trained on the **faucet-open-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/faucet-open-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=faucet-open-v2 --train_dir=./train_dir --experiment=faucet-open-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=faucet-open-v2 --train_dir=./train_dir --experiment=faucet-open-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/drawer-open-v2
qgallouedec
2023-08-22T09:21:02Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:57:55Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: drawer-open-v2 type: drawer-open-v2 metrics: - type: mean_reward value: 493.34 +/- 2.61 name: mean_reward verified: false --- A(n) **APPO** model trained on the **drawer-open-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/drawer-open-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=drawer-open-v2 --train_dir=./train_dir --experiment=drawer-open-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=drawer-open-v2 --train_dir=./train_dir --experiment=drawer-open-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
qgallouedec/door-close-v2
qgallouedec
2023-08-22T09:16:26Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:57:05Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: door-close-v2 type: door-close-v2 metrics: - type: mean_reward value: 544.50 +/- 25.16 name: mean_reward verified: false --- A(n) **APPO** model trained on the **door-close-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/door-close-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=door-close-v2 --train_dir=./train_dir --experiment=door-close-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=door-close-v2 --train_dir=./train_dir --experiment=door-close-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
IngeniousArtist/distilbert-finance
IngeniousArtist
2023-08-22T09:15:43Z
161
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-31T00:31:08Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - accuracy model-index: - name: distilbert-finance results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank config: sentences_50agree split: train args: sentences_50agree metrics: - name: Accuracy type: accuracy value: 0.7386363636363636 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-finance This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.9962 - Accuracy: 0.7386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.904 | 0.33 | 20 | 1.5959 | 0.4205 | | 0.6562 | 0.66 | 40 | 1.6665 | 0.4143 | | 0.539 | 0.98 | 60 | 1.6067 | 0.3936 | | 0.4759 | 1.31 | 80 | 1.5079 | 0.4236 | | 0.3882 | 1.64 | 100 | 1.4719 | 0.4298 | | 0.3782 | 1.97 | 120 | 1.2392 | 0.4267 | | 0.2729 | 2.3 | 140 | 1.0114 | 0.4928 | | 0.2607 | 2.62 | 160 | 0.9514 | 0.5930 | | 0.2889 | 2.95 | 180 | 0.8661 | 0.6477 | | 0.181 | 3.28 | 200 | 0.7093 | 0.7417 | | 0.1742 | 3.61 | 220 | 1.1042 | 0.5764 | | 0.1904 | 3.93 | 240 | 0.7439 | 0.7510 | | 0.1186 | 4.26 | 260 | 0.8587 | 0.7469 | | 0.137 | 4.59 | 280 | 0.7408 | 0.7603 | | 0.1166 | 4.92 | 300 | 1.0107 | 0.6705 | | 0.0938 | 5.25 | 320 | 0.7883 | 0.7624 | | 0.0881 | 5.57 | 340 | 1.0339 | 0.7056 | | 0.0812 | 5.9 | 360 | 0.8409 | 0.7490 | | 0.0586 | 6.23 | 380 | 0.9146 | 0.7345 | | 0.0572 | 6.56 | 400 | 0.9000 | 0.7366 | | 0.0527 | 6.89 | 420 | 0.9782 | 0.7335 | | 0.045 | 7.21 | 440 | 1.0102 | 0.7262 | | 0.0471 | 7.54 | 460 | 1.0322 | 0.7324 | | 0.0508 | 7.87 | 480 | 0.9381 | 0.7448 | | 0.039 | 8.2 | 500 | 0.9489 | 0.7459 | | 0.0419 | 8.52 | 520 | 0.9779 | 0.7469 | | 0.0256 | 8.85 | 540 | 0.9834 | 0.7407 | | 0.0264 | 9.18 | 560 | 0.9963 | 0.7376 | | 0.0378 | 9.51 | 580 | 0.9981 | 0.7376 | | 0.0421 | 9.84 | 600 | 0.9962 | 0.7386 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
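The card above stops at training details and gives no inference snippet. A minimal sketch for this checkpoint (the exact label strings are an assumption; they come from whatever id-to-label mapping the fine-tuned head defines):

```python
from transformers import pipeline

# Sentiment classification over financial sentences; financial_phrasebank is a
# 3-way task (negative / neutral / positive), but the returned label names depend
# on the fine-tuned head's config.
classifier = pipeline("text-classification", model="IngeniousArtist/distilbert-finance")

print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in 2007."))
```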
qgallouedec/coffee-pull-v2
qgallouedec
2023-08-22T09:12:42Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T09:56:25Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: coffee-pull-v2 type: coffee-pull-v2 metrics: - type: mean_reward value: 262.59 +/- 63.08 name: mean_reward verified: false --- A(n) **APPO** model trained on the **coffee-pull-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/coffee-pull-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=coffee-pull-v2 --train_dir=./train_dir --experiment=coffee-pull-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=coffee-pull-v2 --train_dir=./train_dir --experiment=coffee-pull-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
922-CA/gfl-ddlc-TI-tests
922-CA
2023-08-22T09:11:04Z
0
0
null
[ "ddlc", "gfl", "anime", "doki doki literature club", "girl's frontline", "license:creativeml-openrail-m", "region:us" ]
null
2022-11-18T12:13:39Z
--- license: creativeml-openrail-m tags: - ddlc - gfl - anime - doki doki literature club - girl's frontline --- # TEXTUAL INVERSION TESTS (~11/18/2022) # Various old TIs trained on the following characters: Girl's Frontline: * Persicaria (~200 images) * P90 (~200 images) * Springfield (~150 images) * Negev (~100 images) * KAC-PDW (~11 images) * FMG9 (~10 images) Doki Doki Literature Club: * Monika (~25 images) * Yuri (~25 images) * Sayori (~20 images) * Natsuki (~20 images) All were trained on NAI model, with as little as 5000 steps up to 20000 steps. (uploaded for archiving purposes: may look for older versions and more params/hyperparams used with these) # PREVIEWS (~08/14/2023) # Most generated with Silicon29 and a custom model based off it, using simple prompts and nothing else, following the format: 1girl, (solo:1.2), \<TI\>, \<char booru tag\>, \<color of\> eyes, best quality, \<one or two additional prompts, like "dress"\> neg prompt: (cropped:1.4), (loli:1.5), (child:1.5), text, low quality, normal quality, deformed, (bad-hands-5:1.2), (FastNegativeV2:1.1), deep fried, nsfw, big ribbon, red ribbon, painting, jpeg artifact, deformed, red ribbon, too many ribbons, messy ribbon, braids, umbrella, floating hair, see-through clothing, torn clothing, bad hair, gross, necklace, bad proportions, text, username, \<one or two additional prompts, like "school uniform"\> Girl's Frontline (best to worst): * Persicaria: [Example 1](08142023_ti_prevs/full/pers1%2000115-2787276333.png), [Example 2](08142023_ti_prevs/full/pers1%2000153-2615520266.png), [Example 3](08142023_ti_prevs/full/pers1%2000029-761374397.png) * P90: [Example 1](08142023_ti_prevs/full/p90%2000110-3022549351.png), [Example 2](08142023_ti_prevs/full/p90%2000112-1360325468.png), [Example 3](08142023_ti_prevs/full/p90%2000158-3721045199.png) * Negev: [Example 1](08142023_ti_prevs/full/negev5%2000128-1698072591.png), [Example 2](08142023_ti_prevs/full/negev5%2000129-1583369084.png), [Example 3](08142023_ti_prevs/full/negev5%2000027-1566859227.png) * Springfield: [Example 1](08142023_ti_prevs/full/spring1%2000106-1162523717.png), [Example 2](08142023_ti_prevs/full/spring1%2000107-3179963266.png) * KACPDW (overfits to clothes): [Example 1](08142023_ti_prevs/full/kacpdw%2000026-4038413503.png), [Example 2](08142023_ti_prevs/full/kacpdw%2000023-3787426706.png), [Example 3](08142023_ti_prevs/full/kacpdw%2000122-3185291820.png) * FMG9: [Example 1](08142023_ti_prevs/full/fmg9%2000118-3783199521.png), [Example 2](08142023_ti_prevs/full/fmg9%2000119-1811241202.png) Doki Doki Literature Club (best to worst): * Monika: [Example 1](08142023_ti_prevs/full/mon3a%2000141-4131642062.png), [Example 2](08142023_ti_prevs/full/mon3a%2000142-3718578348.png), [Example 3](08142023_ti_prevs/full/mon3a%2000164-4156171172.png) * Yuri (overfits to clothes): [Example 1](08142023_ti_prevs/full/yur3a%2000101-1766762200.png), [Example 2](08142023_ti_prevs/full/yur3a%2000013-2416113300.png), [Example 3](08142023_ti_prevs/full/yur3a%2000011-2706364566.png) * Natsuki: [Example 1](08142023_ti_prevs/full/nat3%2000001-3982624050.png), [Example 2](08142023_ti_prevs/full/nat3%2000145-420199443.png) * Sayori (overfits to chairs...): [Example 1](08142023_ti_prevs/full/say3%2000133-3970420654.png), [Example 2](08142023_ti_prevs/full/say3%2000134-2062223196.png) It's ~2022's files in near late 2023 SD world (as of writing). Maybe it's obsolete, but might be of mild interest.
922-CA/Modded-Berry
922-CA
2023-08-22T09:10:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-18T11:48:37Z
--- license: creativeml-openrail-m --- # INFO (11/18/2022) Old stable diffusion 1.5 merge of berry mix + anythingv3 at around ~30/70 ratio + further merges. (If can recall correctly...)
qgallouedec/box-close-v2
qgallouedec
2023-08-22T09:07:15Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T16:12:19Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: box-close-v2 type: box-close-v2 metrics: - type: mean_reward value: 515.82 +/- 160.02 name: mean_reward verified: false --- A(n) **APPO** model trained on the **box-close-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/box-close-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=box-close-v2 --train_dir=./train_dir --experiment=box-close-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=box-close-v2 --train_dir=./train_dir --experiment=box-close-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
marksverdhei/t5-base-define
marksverdhei
2023-08-22T09:05:55Z
123
6
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "en", "dataset:marksverdhei/wordnet-definitions-en-2021", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-02T09:50:37Z
--- language: en widget: - text: 'define "toecoin": toecoin rose by 200% after Elon Musk mentioned it in his tweet' datasets: - 'marksverdhei/wordnet-definitions-en-2021' --- # T5-define (This model is still a work in progress. If you use it for fine-tuning, make sure to save a local copy.) This model is trained to generate word definitions based on the word and a context, using a subset of WordNet covering all words that have both an example and a definition. The model uses task prompts of the format 'define "[word]": [example sentence]'. This model in particular is a one-shot learner for unseen words, as it has to infer the definition from only one example. How to run: ```python from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("marksverdhei/t5-base-define") model = T5ForConditionalGeneration.from_pretrained("marksverdhei/t5-base-define") prompt = "define \"noseplow\": The children hid as the noseplow drove across the street" ids = tokenizer(prompt, return_tensors="pt").input_ids generated_tokens = model.generate(ids)[0][1:-1] print(tokenizer.decode(generated_tokens)) ``` See the gist for the source code used to train the model: https://gist.github.com/marksverdhei/0a13f67e65460b71c05fcf558a6a91ae
qgallouedec/basketball-v2
qgallouedec
2023-08-22T09:05:27Z
0
1
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T16:11:22Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: basketball-v2 type: basketball-v2 metrics: - type: mean_reward value: 584.02 +/- 49.43 name: mean_reward verified: false --- A(n) **APPO** model trained on the **basketball-v2** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r qgallouedec/basketball-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m enjoy --algo=APPO --env=basketball-v2 --train_dir=./train_dir --experiment=basketball-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m train --algo=APPO --env=basketball-v2 --train_dir=./train_dir --experiment=basketball-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
newronai/lma2-7b-Chat-Adapter-N
newronai
2023-08-22T08:56:11Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T08:56:04Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
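The card above lists only the quantization config used for training. A minimal sketch of how a PEFT adapter like this is typically attached to its base model (assumptions: the base checkpoint recorded in the adapter config is accessible, and the adapter targets a causal LM):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "newronai/lma2-7b-Chat-Adapter-N"

# The adapter's config stores the base model it was trained on top of
config = PeftConfig.from_pretrained(adapter_id)

base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_4bit=True,   # mirrors the 4-bit nf4 setup listed in the card
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the trained adapter weights to the quantized base model
model = PeftModel.from_pretrained(base, adapter_id)
```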
grv805/llama2-qlora-finetunined-13b-gcp
grv805
2023-08-22T08:47:03Z
6
0
peft
[ "peft", "region:us" ]
null
2023-08-22T08:46:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
tianpf/chinese-alpaca-2-qlora-finetunined-law2
tianpf
2023-08-22T08:46:11Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-22T08:46:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
linoyts/lora-xl-3d_icons-0.0001-5e-05-2000-1-5
linoyts
2023-08-22T08:45:51Z
5
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-22T07:54:07Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: blb 3d icon tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - LinoyTsaban/lora-xl-3d_icons-0.0001-5e-05-2000-1-5 These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "blb 3d icon" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
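A minimal generation sketch for these weights, assuming a recent diffusers release with `load_lora_weights` and that the repo id from this record (`linoyts/lora-xl-3d_icons-0.0001-5e-05-2000-1-5`) hosts the LoRA file; the card itself refers to a `LinoyTsaban/...` path, so adjust the id if needed:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The fp16-fix VAE mentioned in the card avoids NaN artifacts when running SDXL in float16
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repo
pipe.load_lora_weights("linoyts/lora-xl-3d_icons-0.0001-5e-05-2000-1-5")

# "blb 3d icon" is the instance prompt the weights were trained on
image = pipe("a blb 3d icon of a rocket ship").images[0]
image.save("icon.png")
```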
xray1111/ppo-LunarLander-v2
xray1111
2023-08-22T08:41:09Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T08:40:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 273.95 +/- 16.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
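The usage block in the card above is left as a TODO. A minimal sketch of the usual loading pattern, assuming a recent stable-baselines3 (2.x, which uses Gymnasium); the checkpoint filename inside the repo is a guess and should be adjusted to the actual `.zip` stored there:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is an assumption based on the repo name)
checkpoint = load_from_hub(repo_id="xray1111/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```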
newronai/llama-2-7b-Chat-QLoRA-New-1.0
newronai
2023-08-22T08:36:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T08:36:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
Tina-2005/my-pet-dog
Tina-2005
2023-08-22T08:33:08Z
0
0
null
[ "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-22T08:31:07Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by Tina-2005 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: TSEC230 Sample pictures of this concept: ![0](https://huggingface.co/Tina-2005/my-pet-dog/resolve/main/sample_images/06.jpg)
kaanhho/whisper-tiny-01
kaanhho
2023-08-22T08:29:58Z
76
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-22T01:07:48Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-01 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.33884297520661155 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-01 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6410 - Wer Ortho: 0.3430 - Wer: 0.3388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.001 | 17.24 | 500 | 0.6410 | 0.3430 | 0.3388 | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
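A minimal inference sketch for this fine-tuned checkpoint; the audio path below is a placeholder, and since the model was tuned on en-US MINDS-14, short English clips are the expected input:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kaanhho/whisper-tiny-01")

# "sample.wav" is a placeholder path to a short English audio clip
print(asr("sample.wav")["text"])
```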
AnnaMats/ppo-SnowballTarget
AnnaMats
2023-08-22T08:28:08Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-08-22T08:28:05Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: AnnaMats/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
SpecialOne/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
SpecialOne
2023-08-22T08:27:49Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T08:27:46Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
mahendra0203/lora-trained-xl-colab-5c-steps-standing
mahendra0203
2023-08-22T08:21:35Z
2
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-22T05:26:47Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks man tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - mahendra0203/lora-trained-xl-colab-5c-steps-standing These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "a photo of sks man" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
asenella/ms_config_1_alpha_10_beta_1_seed_1
asenella
2023-08-22T08:16:05Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-08-22T08:16:03Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
ashoknpotti/bloom-1b7-qanda
ashoknpotti
2023-08-22T08:14:55Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T07:47:36Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
kcsteam1/0822_1100step
kcsteam1
2023-08-22T08:03:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-22T07:52:46Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0 - PEFT 0.5.0.dev0
ruanwz/transformers_issues_topics
ruanwz
2023-08-22T08:02:57Z
5
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2023-08-22T08:02:54Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # transformers_issues_topics This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("ruanwz/transformers_issues_topics") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 30 * Number of training documents: 9000 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | tensorflow - pytorch - tf - pretrained - gpu | 11 | -1_tensorflow_pytorch_tf_pretrained | | 0 | tokenizer - tokenizers - tokenize - tokenization - token | 2089 | 0_tokenizer_tokenizers_tokenize_tokenization | | 1 | gpt2 - gpt - gpt2doubleheadsmodel - gpt2lmheadmodel - distilgpt2 | 1471 | 1_gpt2_gpt_gpt2doubleheadsmodel_gpt2lmheadmodel | | 2 | ner - seq2seqtrainer - seq2seq - runseq2seqpy - valueerror | 856 | 2_ner_seq2seqtrainer_seq2seq_runseq2seqpy | | 3 | modelcard - modelcards - card - model - cards | 601 | 3_modelcard_modelcards_card_model | | 4 | trainer - trainertrain - trainers - training - evaluateduringtraining | 500 | 4_trainer_trainertrain_trainers_training | | 5 | longformer - longformers - longformerformultiplechoice - tf - longformertokenizerfast | 455 | 5_longformer_longformers_longformerformultiplechoice_tf | | 6 | typos - typo - fix - correction - fixed | 439 | 6_typos_typo_fix_correction | | 7 | albertbasev2 - albertforpretraining - albert - albertformaskedlm - xlnet | 407 | 7_albertbasev2_albertforpretraining_albert_albertformaskedlm | | 8 | summarization - summaries - summary - text - nlp | 351 | 8_summarization_summaries_summary_text | | 9 | readmemd - readmetxt - readme - modelcard - file | 333 | 9_readmemd_readmetxt_readme_modelcard | | 10 | transformerscli - transformers - transformer - transformerxl - importerror | 259 | 10_transformerscli_transformers_transformer_transformerxl | | 11 | ci - testing - tests - test - slow | 228 | 11_ci_testing_tests_test | | 12 | questionansweringpipeline - questionanswering - answering - tfalbertforquestionanswering - questionasnwering | 156 | 12_questionansweringpipeline_questionanswering_answering_tfalbertforquestionanswering | | 13 | pipeline - pipelines - pipelinespy - pipelineexception - fixpipeline | 137 | 13_pipeline_pipelines_pipelinespy_pipelineexception | | 14 | onnxonnxruntime - onnx - onnxexport - 04onnxexport - 04onnxexportipynb | 113 | 14_onnxonnxruntime_onnx_onnxexport_04onnxexport | | 15 | benchmark - benchmarks - accuracy - evaluation - metrics | 98 | 15_benchmark_benchmarks_accuracy_evaluation | | 16 | huggingfacemaster - huggingfacetokenizers297 - huggingface - huggingfaces - huggingfacetransformers | 81 | 16_huggingfacemaster_huggingfacetokenizers297_huggingface_huggingfaces | | 17 | generationbeamsearchpy - generatebeamsearch - generatebeamsearchoutputs - beamsearch - nonbeamsearch | 69 | 17_generationbeamsearchpy_generatebeamsearch_generatebeamsearchoutputs_beamsearch | | 18 | wav2vec2 - wav2vec - wav2vec20 - wav2vec2forctc - wav2vec2xlrswav2vec2 | 56 | 18_wav2vec2_wav2vec_wav2vec20_wav2vec2forctc | | 19 | flax - flaxelectraformaskedlm - flaxelectraforpretraining - flaxjax - 
flaxelectramodel | 53 | 19_flax_flaxelectraformaskedlm_flaxelectraforpretraining_flaxjax | | 20 | cachedir - cache - cachedpath - cached - caching | 43 | 20_cachedir_cache_cachedpath_cached | | 21 | notebook - notebooks - colab - community - t5 | 33 | 21_notebook_notebooks_colab_community | | 22 | wandbproject - wandb - sagemaker - sagemakertrainer - wandbcallback | 32 | 22_wandbproject_wandb_sagemaker_sagemakertrainer | | 23 | bigbird - py7zr - tapas - tres - v4 | 32 | 23_bigbird_py7zr_tapas_tres | | 24 | electra - electrapretrainedmodel - electraformaskedlm - electraformultiplechoice - electrafortokenclassification | 28 | 24_electra_electrapretrainedmodel_electraformaskedlm_electraformultiplechoice | | 25 | layoutlm - layout - layoutlmtokenizer - layoutlmbaseuncased - tf | 24 | 25_layoutlm_layout_layoutlmtokenizer_layoutlmbaseuncased | | 26 | isort - blackisortflake8 - github - repo - version | 18 | 26_isort_blackisortflake8_github_repo | | 27 | pplm - pr - deprecated - variable - ppl | 14 | 27_pplm_pr_deprecated_variable | | 28 | blenderbot - blenderbot3b - blenderbotforcausallm - chatbot - boto3 | 13 | 28_blenderbot_blenderbot3b_blenderbotforcausallm_chatbot | </details> ## Training hyperparameters * calculate_probabilities: False * language: english * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: 30 * seed_topic_list: None * top_n_words: 10 * verbose: True ## Framework versions * Numpy: 1.23.5 * HDBSCAN: 0.8.33 * UMAP: 0.5.3 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.2.2 * Transformers: 4.31.0 * Numba: 0.56.4 * Plotly: 5.15.0 * Python: 3.10.12
Muhammadreza/mann-e-dark-fantasy-revised-2
Muhammadreza
2023-08-22T08:02:38Z
4
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-22T07:57:20Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### mann-e_dark-fantasy_revised-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
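The card above does not show how to load the checkpoint. Since the repo is tagged `diffusers:StableDiffusionPipeline`, a minimal sketch would be (the prompt is a guess; the concept token used during DreamBooth training is not documented in the card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Muhammadreza/mann-e-dark-fantasy-revised-2", torch_dtype=torch.float16
).to("cuda")

# Prompt style is assumed; adjust once the trained concept token is known
image = pipe("dark fantasy landscape, highly detailed").images[0]
image.save("dark_fantasy.png")
```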
huyen89/ppo-LunarLander-v2
huyen89
2023-08-22T07:57:43Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T07:57:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.98 +/- 24.51 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dkqjrm/20230822145721
dkqjrm
2023-08-22T07:51:51Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T05:57:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230822145721' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230822145721 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3478 - Accuracy: 0.5271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.3504 | 0.5235 | | 0.3893 | 2.0 | 624 | 0.3582 | 0.4729 | | 0.3893 | 3.0 | 936 | 0.3531 | 0.5271 | | 0.3878 | 4.0 | 1248 | 0.3627 | 0.4729 | | 0.3764 | 5.0 | 1560 | 0.3488 | 0.5271 | | 0.3764 | 6.0 | 1872 | 0.3529 | 0.5271 | | 0.3735 | 7.0 | 2184 | 0.3598 | 0.5271 | | 0.3735 | 8.0 | 2496 | 0.3609 | 0.5271 | | 0.3703 | 9.0 | 2808 | 0.3605 | 0.4729 | | 0.3684 | 10.0 | 3120 | 0.3562 | 0.5271 | | 0.3684 | 11.0 | 3432 | 0.4032 | 0.4729 | | 0.3687 | 12.0 | 3744 | 0.3752 | 0.4729 | | 0.3667 | 13.0 | 4056 | 0.3566 | 0.4729 | | 0.3667 | 14.0 | 4368 | 0.3499 | 0.5271 | | 0.3689 | 15.0 | 4680 | 0.3503 | 0.5271 | | 0.3689 | 16.0 | 4992 | 0.3539 | 0.5271 | | 0.3663 | 17.0 | 5304 | 0.3485 | 0.5271 | | 0.3677 | 18.0 | 5616 | 0.3617 | 0.5271 | | 0.3677 | 19.0 | 5928 | 0.3666 | 0.4729 | | 0.3716 | 20.0 | 6240 | 0.3562 | 0.5271 | | 0.3671 | 21.0 | 6552 | 0.3573 | 0.5271 | | 0.3671 | 22.0 | 6864 | 0.3900 | 0.5271 | | 0.3642 | 23.0 | 7176 | 0.3554 | 0.5271 | | 0.3642 | 24.0 | 7488 | 0.3594 | 0.4729 | | 0.3649 | 25.0 | 7800 | 0.3498 | 0.5271 | | 0.3639 | 26.0 | 8112 | 0.3646 | 0.4729 | | 0.3639 | 27.0 | 8424 | 0.3498 | 0.5271 | | 0.3615 | 28.0 | 8736 | 0.3504 | 0.5271 | | 0.3606 | 29.0 | 9048 | 0.3485 | 0.5271 | | 0.3606 | 30.0 | 9360 | 0.3479 | 0.5271 | | 0.3623 | 31.0 | 9672 | 0.3498 | 0.5271 | | 0.3623 | 32.0 | 9984 | 0.3478 | 0.5271 | | 0.3623 | 33.0 | 10296 | 0.3545 | 0.5271 | | 0.3603 | 34.0 | 10608 | 0.3483 | 0.5271 | | 0.3603 | 35.0 | 10920 | 0.3481 | 0.5271 | | 0.3604 | 36.0 | 11232 | 0.3495 | 0.5271 | | 0.3586 | 37.0 | 11544 | 0.3507 | 0.5271 | | 0.3586 | 38.0 | 11856 | 0.3486 | 0.5271 | | 0.3593 | 39.0 | 12168 | 0.3492 | 0.5271 | | 0.3593 | 40.0 | 12480 | 0.3492 | 0.5271 | | 0.359 | 41.0 | 12792 | 0.3485 | 0.5271 | | 0.3584 | 42.0 | 13104 | 0.3579 | 0.4729 | | 0.3584 | 43.0 | 13416 | 0.3480 | 0.5271 | | 0.3606 | 44.0 | 13728 | 0.3479 | 0.5271 | | 0.3568 | 45.0 | 14040 | 0.3530 | 0.5271 | | 0.3568 | 46.0 | 14352 | 0.3499 | 0.5271 | | 0.3589 | 47.0 | 14664 | 0.3547 | 0.4729 | | 0.3589 | 48.0 | 14976 | 0.3499 | 0.5271 | | 0.3589 | 49.0 | 15288 | 0.3478 | 0.5271 | | 0.3573 | 50.0 | 15600 | 0.3481 | 0.5271 | | 0.3573 | 51.0 | 15912 | 0.3487 | 0.5271 | | 0.3569 | 52.0 | 16224 | 0.3481 | 0.5271 | | 0.3572 | 53.0 | 16536 | 0.3480 | 0.5271 | | 0.3572 | 54.0 | 16848 | 0.3481 | 0.5271 | | 0.3558 | 55.0 | 17160 | 0.3478 | 0.5271 | | 0.3558 | 
56.0 | 17472 | 0.3479 | 0.5271 | | 0.3557 | 57.0 | 17784 | 0.3484 | 0.5271 | | 0.3558 | 58.0 | 18096 | 0.3478 | 0.5271 | | 0.3558 | 59.0 | 18408 | 0.3478 | 0.5271 | | 0.3548 | 60.0 | 18720 | 0.3478 | 0.5271 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
jimmyofdoom/Reinforce-Pixelcopter-PLE-v0
jimmyofdoom
2023-08-22T07:50:21Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-22T07:50:18Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.70 +/- 24.71 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rushichavda/bloom_1b_lora
rushichavda
2023-08-22T07:49:47Z
3
0
peft
[ "peft", "region:us" ]
null
2023-08-22T07:49:42Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
acondess/lineartv1.1
acondess
2023-08-22T07:46:51Z
32
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-21T02:16:55Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: true --- Prompt format: lineart of {obj} For example: lineart of cat This repository is a demonstration of the acondess/lineart model built entirely through the Hub web interface: model creation, file upload and management, and the Model card were all set up from the Hub UI alone. ```py from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained("acondess/lineartv1.1") prompt = "lineart of cat" image = pipeline(prompt).images[0] image.save("lineart_cat.png") ```
dkqjrm/20230822144236
dkqjrm
2023-08-22T07:44:05Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T05:42:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: '20230822144236' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20230822144236 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3486 - Accuracy: 0.5235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 11 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.3705 | 0.4729 | | 0.3743 | 2.0 | 624 | 0.3484 | 0.5162 | | 0.3743 | 3.0 | 936 | 0.3504 | 0.5162 | | 0.3726 | 4.0 | 1248 | 0.3527 | 0.5235 | | 0.3712 | 5.0 | 1560 | 0.3552 | 0.4729 | | 0.3712 | 6.0 | 1872 | 0.3480 | 0.5199 | | 0.3669 | 7.0 | 2184 | 0.3501 | 0.4729 | | 0.3669 | 8.0 | 2496 | 0.3503 | 0.4368 | | 0.3658 | 9.0 | 2808 | 0.3503 | 0.5343 | | 0.3656 | 10.0 | 3120 | 0.3483 | 0.5199 | | 0.3656 | 11.0 | 3432 | 0.3510 | 0.4729 | | 0.3634 | 12.0 | 3744 | 0.3557 | 0.4729 | | 0.3613 | 13.0 | 4056 | 0.3537 | 0.4729 | | 0.3613 | 14.0 | 4368 | 0.3505 | 0.5199 | | 0.3609 | 15.0 | 4680 | 0.3493 | 0.5199 | | 0.3609 | 16.0 | 4992 | 0.3488 | 0.5307 | | 0.3591 | 17.0 | 5304 | 0.3568 | 0.5235 | | 0.3574 | 18.0 | 5616 | 0.3486 | 0.5235 | | 0.3574 | 19.0 | 5928 | 0.3552 | 0.4729 | | 0.3599 | 20.0 | 6240 | 0.3553 | 0.5271 | | 0.3556 | 21.0 | 6552 | 0.3502 | 0.5307 | | 0.3556 | 22.0 | 6864 | 0.3525 | 0.5271 | | 0.3573 | 23.0 | 7176 | 0.3553 | 0.5199 | | 0.3573 | 24.0 | 7488 | 0.3492 | 0.5162 | | 0.3574 | 25.0 | 7800 | 0.3492 | 0.5235 | | 0.3559 | 26.0 | 8112 | 0.3531 | 0.4729 | | 0.3559 | 27.0 | 8424 | 0.3602 | 0.4729 | | 0.3544 | 28.0 | 8736 | 0.3501 | 0.5379 | | 0.3539 | 29.0 | 9048 | 0.3490 | 0.5018 | | 0.3539 | 30.0 | 9360 | 0.3491 | 0.5090 | | 0.3529 | 31.0 | 9672 | 0.3518 | 0.5271 | | 0.3529 | 32.0 | 9984 | 0.3489 | 0.5199 | | 0.3531 | 33.0 | 10296 | 0.3484 | 0.5307 | | 0.3527 | 34.0 | 10608 | 0.3487 | 0.5271 | | 0.3527 | 35.0 | 10920 | 0.3491 | 0.5307 | | 0.3521 | 36.0 | 11232 | 0.3498 | 0.5343 | | 0.3513 | 37.0 | 11544 | 0.3500 | 0.5235 | | 0.3513 | 38.0 | 11856 | 0.3487 | 0.5235 | | 0.3526 | 39.0 | 12168 | 0.3494 | 0.5415 | | 0.3526 | 40.0 | 12480 | 0.3495 | 0.5451 | | 0.352 | 41.0 | 12792 | 0.3489 | 0.5343 | | 0.353 | 42.0 | 13104 | 0.3530 | 0.4729 | | 0.353 | 43.0 | 13416 | 0.3492 | 0.5271 | | 0.3509 | 44.0 | 13728 | 0.3501 | 0.4693 | | 0.3523 | 45.0 | 14040 | 0.3525 | 0.4729 | | 0.3523 | 46.0 | 14352 | 0.3491 | 0.5054 | | 0.3506 | 47.0 | 14664 | 0.3515 | 0.4729 | | 0.3506 | 48.0 | 14976 | 0.3494 | 0.5379 | | 0.3518 | 49.0 | 15288 | 0.3483 | 0.5235 | | 0.3507 | 50.0 | 15600 | 0.3490 | 0.5271 | | 0.3507 | 51.0 | 15912 | 0.3489 | 0.5379 | | 0.3514 | 52.0 | 16224 | 0.3490 | 0.5090 | | 0.3509 | 53.0 | 16536 | 0.3484 | 0.5235 | | 0.3509 | 54.0 | 16848 | 0.3486 | 0.5199 | | 0.3499 | 55.0 | 17160 | 0.3485 | 0.5199 | | 0.3499 | 56.0 
| 17472 | 0.3486 | 0.5199 | | 0.3504 | 57.0 | 17784 | 0.3493 | 0.5415 | | 0.3495 | 58.0 | 18096 | 0.3486 | 0.5307 | | 0.3495 | 59.0 | 18408 | 0.3485 | 0.5271 | | 0.3505 | 60.0 | 18720 | 0.3486 | 0.5235 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
mkuntz/a2c-PandaReachDense-v2-1
mkuntz
2023-08-22T07:27:09Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-02-26T22:15:27Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.08 +/- 0.53 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
Roderick20/ddpm-celebahq-finetuned-butterflies-2epochs
Roderick20
2023-08-22T07:09:03Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-08-22T07:08:07Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Roderick20/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
abdiharyadi/indobart-v2-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice
abdiharyadi
2023-08-22T07:06:57Z
165
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "base_model:indobenchmark/indobart-v2", "base_model:finetune:indobenchmark/indobart-v2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-22T04:22:50Z
--- license: mit base_model: indobenchmark/indobart-v2 tags: - generated_from_trainer model-index: - name: indobart-v2-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indobart-v2-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 331 | 0.1993 | | 0.2291 | 2.0 | 662 | 0.2123 | | 0.2291 | 3.0 | 993 | 0.2197 | | 0.0238 | 4.0 | 1324 | 0.2400 | | 0.0115 | 5.0 | 1655 | 0.2418 | | 0.0115 | 6.0 | 1986 | 0.2446 | | 0.0069 | 7.0 | 2317 | 0.2520 | | 0.0044 | 8.0 | 2648 | 0.2584 | | 0.0044 | 9.0 | 2979 | 0.2652 | | 0.0034 | 10.0 | 3310 | 0.2628 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
oananovac/enron_gpt2_model
oananovac
2023-08-22T06:58:52Z
135
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-22T05:44:51Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: enron_gpt2_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # enron_gpt2_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2618 | 1.0 | 336 | 3.1207 | | 3.0512 | 2.0 | 672 | 3.0692 | | 2.9447 | 3.0 | 1008 | 3.0390 | | 2.8642 | 4.0 | 1344 | 3.0270 | | 2.7997 | 5.0 | 1680 | 3.0158 | | 2.7479 | 6.0 | 2016 | 3.0123 | | 2.7066 | 7.0 | 2352 | 3.0106 | | 2.6745 | 8.0 | 2688 | 3.0139 | | 2.6514 | 9.0 | 3024 | 3.0126 | | 2.6367 | 10.0 | 3360 | 3.0150 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
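The card above gives training details only. A minimal generation sketch for this GPT-2 fine-tune; the prompt is an arbitrary email-style opener, chosen because the model was tuned on Enron mail:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="oananovac/enron_gpt2_model")

# Continue an email-like prompt for up to 40 new tokens
print(generator("Hi John, just following up on our meeting", max_new_tokens=40)[0]["generated_text"])
```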
dkqjrm/20230822135401
dkqjrm
2023-08-22T06:55:46Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T04:54:19Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822135401'
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 20230822135401

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3478
- Accuracy: 0.6065

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.3502 | 0.5451 |
| 0.3914 | 2.0 | 624 | 0.3937 | 0.4729 |
| 0.3914 | 3.0 | 936 | 0.3710 | 0.4729 |
| 0.3806 | 4.0 | 1248 | 0.3529 | 0.4693 |
| 0.3775 | 5.0 | 1560 | 0.3489 | 0.5487 |
| 0.3775 | 6.0 | 1872 | 0.3466 | 0.5451 |
| 0.3668 | 7.0 | 2184 | 0.4554 | 0.5379 |
| 0.3668 | 8.0 | 2496 | 0.3811 | 0.5451 |
| 0.3698 | 9.0 | 2808 | 0.3497 | 0.5271 |
| 0.3659 | 10.0 | 3120 | 0.3462 | 0.5199 |
| 0.3659 | 11.0 | 3432 | 0.4239 | 0.4729 |
| 0.3675 | 12.0 | 3744 | 0.3535 | 0.5126 |
| 0.3617 | 13.0 | 4056 | 0.3470 | 0.5090 |
| 0.3617 | 14.0 | 4368 | 0.3630 | 0.5054 |
| 0.3624 | 15.0 | 4680 | 0.3506 | 0.5235 |
| 0.3624 | 16.0 | 4992 | 0.3747 | 0.5487 |
| 0.359 | 17.0 | 5304 | 0.3704 | 0.5487 |
| 0.3576 | 18.0 | 5616 | 0.3538 | 0.5343 |
| 0.3576 | 19.0 | 5928 | 0.3597 | 0.5415 |
| 0.3612 | 20.0 | 6240 | 0.3637 | 0.5596 |
| 0.359 | 21.0 | 6552 | 0.3487 | 0.5704 |
| 0.359 | 22.0 | 6864 | 0.3591 | 0.5415 |
| 0.3566 | 23.0 | 7176 | 0.3946 | 0.5523 |
| 0.3566 | 24.0 | 7488 | 0.3627 | 0.5018 |
| 0.3551 | 25.0 | 7800 | 0.3540 | 0.5523 |
| 0.353 | 26.0 | 8112 | 0.3461 | 0.5343 |
| 0.353 | 27.0 | 8424 | 0.3469 | 0.5596 |
| 0.3517 | 28.0 | 8736 | 0.3471 | 0.5993 |
| 0.3549 | 29.0 | 9048 | 0.3504 | 0.5632 |
| 0.3549 | 30.0 | 9360 | 0.3559 | 0.5812 |
| 0.3523 | 31.0 | 9672 | 0.3769 | 0.5560 |
| 0.3523 | 32.0 | 9984 | 0.3473 | 0.5704 |
| 0.3514 | 33.0 | 10296 | 0.3632 | 0.5704 |
| 0.3513 | 34.0 | 10608 | 0.3503 | 0.5848 |
| 0.3513 | 35.0 | 10920 | 0.3464 | 0.5560 |
| 0.3512 | 36.0 | 11232 | 0.3493 | 0.5740 |
| 0.3494 | 37.0 | 11544 | 0.3479 | 0.6101 |
| 0.3494 | 38.0 | 11856 | 0.3464 | 0.6029 |
| 0.3478 | 39.0 | 12168 | 0.3495 | 0.6101 |
| 0.3478 | 40.0 | 12480 | 0.3462 | 0.6065 |
| 0.3479 | 41.0 | 12792 | 0.3519 | 0.6065 |
| 0.3472 | 42.0 | 13104 | 0.3420 | 0.5704 |
| 0.3472 | 43.0 | 13416 | 0.3555 | 0.5740 |
| 0.3456 | 44.0 | 13728 | 0.3471 | 0.5957 |
| 0.3448 | 45.0 | 14040 | 0.3434 | 0.5776 |
| 0.3448 | 46.0 | 14352 | 0.3401 | 0.6209 |
| 0.3439 | 47.0 | 14664 | 0.3439 | 0.5776 |
| 0.3439 | 48.0 | 14976 | 0.3523 | 0.5921 |
| 0.3442 | 49.0 | 15288 | 0.3466 | 0.6137 |
| 0.3437 | 50.0 | 15600 | 0.3549 | 0.5776 |
| 0.3437 | 51.0 | 15912 | 0.3417 | 0.6173 |
| 0.3413 | 52.0 | 16224 | 0.3409 | 0.6209 |
| 0.3416 | 53.0 | 16536 | 0.3607 | 0.5884 |
| 0.3416 | 54.0 | 16848 | 0.3574 | 0.5848 |
| 0.3401 | 55.0 | 17160 | 0.3494 | 0.5812 |
| 0.3401 | 56.0 | 17472 | 0.3480 | 0.6137 |
| 0.3395 | 57.0 | 17784 | 0.3434 | 0.6029 |
| 0.3399 | 58.0 | 18096 | 0.3454 | 0.5993 |
| 0.3399 | 59.0 | 18408 | 0.3477 | 0.5957 |
| 0.3398 | 60.0 | 18720 | 0.3478 | 0.6065 |

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
RishuD7/t5_number_v6
RishuD7
2023-08-22T06:50:33Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-22T06:11:47Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_number_v6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5_number_v6

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0048
- Cer: 0.4061

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 197 | 0.0122 | 0.6142 |
| 4.5743 | 2.0 | 394 | 0.0072 | 0.5203 |
| 0.0132 | 3.0 | 591 | 0.0064 | 0.4695 |
| 0.009 | 4.0 | 788 | 0.0058 | 0.4416 |
| 0.0076 | 5.0 | 985 | 0.0056 | 0.4416 |
| 0.0067 | 6.0 | 1182 | 0.0053 | 0.4264 |
| 0.0062 | 7.0 | 1379 | 0.0051 | 0.4137 |
| 0.0059 | 8.0 | 1576 | 0.0049 | 0.4036 |
| 0.0057 | 9.0 | 1773 | 0.0049 | 0.4010 |
| 0.0056 | 10.0 | 1970 | 0.0048 | 0.4061 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
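The card does not document the prompt format the model expects, but since the repository ships a standard T5 checkpoint, a minimal inference sketch with the `transformers` text2text pipeline would presumably look like the following; the example input is only a placeholder assumption.

```python
# pip install -q transformers
from transformers import pipeline

# Load the fine-tuned checkpoint; the exact prompt format is not documented
# in the card, so the input below is a placeholder.
extractor = pipeline("text2text-generation", model="RishuD7/t5_number_v6")

result = extractor("Extract the number: The invoice total is 1,234.56 USD.", max_length=32)
print(result[0]["generated_text"])
```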
ashoknpotti/bloom-3b-qanda
ashoknpotti
2023-08-22T06:47:10Z
6
0
peft
[ "peft", "region:us" ]
null
2023-08-22T06:09:08Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.5.0.dev0
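The card records only the PEFT version, so both the base checkpoint and the prompt format below are assumptions; based on the repository name, a sketch for attaching the adapter on top of `bigscience/bloom-3b` might look like this.

```python
# pip install -q transformers peft
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the adapter was trained on top of bigscience/bloom-3b (inferred from the repository name).
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

# Attach the PEFT adapter weights hosted in this repository.
model = PeftModel.from_pretrained(base_model, "ashoknpotti/bloom-3b-qanda")

prompt = "Question: What does PEFT stand for?\nAnswer:"  # placeholder prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```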
gabrielyang/finance_news_classifier-KR_v7
gabrielyang
2023-08-22T06:40:38Z
106
1
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-17T02:56:13Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finance_news_classifier-KR_v7
  results: []
language:
- ko
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finance_news_classifier-KR_v7

This model is a fine-tuned version of [Hyeonseo/ko-finance_news_classifier](https://huggingface.co/Hyeonseo/ko-finance_news_classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9701

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 243 | 0.1994 | 0.9608 |
| No log | 2.0 | 486 | 0.1584 | 0.9660 |
| 0.1793 | 3.0 | 729 | 0.2011 | 0.9557 |
| 0.1793 | 4.0 | 972 | 0.2214 | 0.9629 |
| 0.085 | 5.0 | 1215 | 0.2885 | 0.9526 |
| 0.085 | 6.0 | 1458 | 0.2288 | 0.9660 |
| 0.0365 | 7.0 | 1701 | 0.2512 | 0.9619 |
| 0.0365 | 8.0 | 1944 | 0.2201 | 0.9670 |
| 0.0178 | 9.0 | 2187 | 0.2621 | 0.9598 |
| 0.0178 | 10.0 | 2430 | 0.2265 | 0.9701 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
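A minimal inference sketch, assuming the standard `transformers` text-classification pipeline; the sample headline is only an illustration, and the label names returned depend on the label mapping stored in the checkpoint's config.

```python
# pip install -q transformers
from transformers import pipeline

# XLM-RoBERTa-based Korean finance news classifier; labels come from the checkpoint's config.
classifier = pipeline("text-classification", model="gabrielyang/finance_news_classifier-KR_v7")

# Placeholder headline ("Samsung Electronics' Q2 operating profit beats market expectations").
print(classifier("삼성전자 2분기 영업이익, 시장 기대치 상회"))
```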
Chat-Error/Kurumi_lora
Chat-Error
2023-08-22T06:17:41Z
0
1
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-02-26T11:21:30Z
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
asenella/ms_config_1_alpha_10_beta_1_seed_0
asenella
2023-08-22T06:13:34Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-08-22T06:13:32Z
---
language: en
tags:
- multivae
license: apache-2.0
---

### Downloading this model from the Hub

This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`

```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
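For this particular checkpoint, the placeholder path in the snippet above would presumably be replaced with the repository id:

```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/ms_config_1_alpha_10_beta_1_seed_0")
```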
RishuD7/t5_number_v5
RishuD7
2023-08-22T05:53:37Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-21T11:58:34Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_number_v5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5_number_v5

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0055
- Cer: 0.4721

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 197 | 0.0132 | 0.7284 |
| 4.8427 | 2.0 | 394 | 0.0077 | 0.5711 |
| 0.0136 | 3.0 | 591 | 0.0068 | 0.5228 |
| 0.0085 | 4.0 | 788 | 0.0064 | 0.5102 |
| 0.0074 | 5.0 | 985 | 0.0062 | 0.4975 |
| 0.0067 | 6.0 | 1182 | 0.0059 | 0.4898 |
| 0.0062 | 7.0 | 1379 | 0.0057 | 0.4848 |
| 0.0057 | 8.0 | 1576 | 0.0057 | 0.4772 |
| 0.0054 | 9.0 | 1773 | 0.0056 | 0.4721 |
| 0.0055 | 10.0 | 1970 | 0.0055 | 0.4721 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
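The card reports a character error rate (Cer) as its evaluation metric. The training script is not shown, so the following is only a sketch of how such a score can be computed with the `evaluate` library (which wraps a jiwer-based CER implementation); the sample strings are placeholders.

```python
# pip install -q evaluate jiwer
import evaluate

# Character error rate: (substitutions + insertions + deletions) / characters in the reference.
cer = evaluate.load("cer")

predictions = ["1234.56"]   # placeholder model outputs
references = ["1,234.56"]   # placeholder ground-truth strings
print(cer.compute(predictions=predictions, references=references))
```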
dkqjrm/20230822124255
dkqjrm
2023-08-22T05:44:49Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T03:43:13Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822124255'
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 20230822124255

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- Accuracy: 0.5271

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.4745 | 0.5271 |
| 0.4082 | 2.0 | 624 | 0.3528 | 0.5307 |
| 0.4082 | 3.0 | 936 | 0.4075 | 0.4729 |
| 0.3905 | 4.0 | 1248 | 0.3634 | 0.4729 |
| 0.3831 | 5.0 | 1560 | 0.3585 | 0.5271 |
| 0.3831 | 6.0 | 1872 | 0.3679 | 0.5271 |
| 0.3797 | 7.0 | 2184 | 0.3550 | 0.5271 |
| 0.3797 | 8.0 | 2496 | 0.4011 | 0.5271 |
| 0.3796 | 9.0 | 2808 | 0.3515 | 0.5271 |
| 0.3836 | 10.0 | 3120 | 0.3478 | 0.5271 |
| 0.3836 | 11.0 | 3432 | 0.3494 | 0.5271 |
| 0.3815 | 12.0 | 3744 | 0.3707 | 0.4729 |
| 0.3769 | 13.0 | 4056 | 0.3625 | 0.4729 |
| 0.3769 | 14.0 | 4368 | 0.3498 | 0.5271 |
| 0.3761 | 15.0 | 4680 | 0.3550 | 0.4729 |
| 0.3761 | 16.0 | 4992 | 0.4420 | 0.5271 |
| 0.3776 | 17.0 | 5304 | 0.3529 | 0.5271 |
| 0.3704 | 18.0 | 5616 | 0.3486 | 0.5271 |
| 0.3704 | 19.0 | 5928 | 0.3670 | 0.4729 |
| 0.3765 | 20.0 | 6240 | 0.3586 | 0.5271 |
| 0.3721 | 21.0 | 6552 | 0.3490 | 0.5271 |
| 0.3721 | 22.0 | 6864 | 0.3729 | 0.5271 |
| 0.3689 | 23.0 | 7176 | 0.3798 | 0.5271 |
| 0.3689 | 24.0 | 7488 | 0.3861 | 0.4729 |
| 0.3698 | 25.0 | 7800 | 0.3498 | 0.5271 |
| 0.369 | 26.0 | 8112 | 0.3698 | 0.4729 |
| 0.369 | 27.0 | 8424 | 0.3507 | 0.5271 |
| 0.3658 | 28.0 | 8736 | 0.3494 | 0.5271 |
| 0.3662 | 29.0 | 9048 | 0.3479 | 0.5271 |
| 0.3662 | 30.0 | 9360 | 0.3504 | 0.5271 |
| 0.3666 | 31.0 | 9672 | 0.3577 | 0.5271 |
| 0.3666 | 32.0 | 9984 | 0.3509 | 0.5271 |
| 0.3637 | 33.0 | 10296 | 0.3483 | 0.5271 |
| 0.3647 | 34.0 | 10608 | 0.3493 | 0.5271 |
| 0.3647 | 35.0 | 10920 | 0.3482 | 0.5271 |
| 0.364 | 36.0 | 11232 | 0.3490 | 0.5271 |
| 0.3635 | 37.0 | 11544 | 0.3478 | 0.5271 |
| 0.3635 | 38.0 | 11856 | 0.3479 | 0.5271 |
| 0.3634 | 39.0 | 12168 | 0.3501 | 0.5271 |
| 0.3634 | 40.0 | 12480 | 0.3478 | 0.5271 |
| 0.3643 | 41.0 | 12792 | 0.3479 | 0.5271 |
| 0.3645 | 42.0 | 13104 | 0.3655 | 0.4729 |
| 0.3645 | 43.0 | 13416 | 0.3512 | 0.5271 |
| 0.363 | 44.0 | 13728 | 0.3491 | 0.5271 |
| 0.3602 | 45.0 | 14040 | 0.3569 | 0.4729 |
| 0.3602 | 46.0 | 14352 | 0.3571 | 0.4729 |
| 0.3616 | 47.0 | 14664 | 0.3522 | 0.5307 |
| 0.3616 | 48.0 | 14976 | 0.3485 | 0.5271 |
| 0.3601 | 49.0 | 15288 | 0.3485 | 0.5271 |
| 0.3606 | 50.0 | 15600 | 0.3481 | 0.5271 |
| 0.3606 | 51.0 | 15912 | 0.3484 | 0.5271 |
| 0.3592 | 52.0 | 16224 | 0.3478 | 0.5271 |
| 0.3587 | 53.0 | 16536 | 0.3485 | 0.5271 |
| 0.3587 | 54.0 | 16848 | 0.3483 | 0.5271 |
| 0.3583 | 55.0 | 17160 | 0.3480 | 0.5271 |
| 0.3583 | 56.0 | 17472 | 0.3478 | 0.5271 |
| 0.358 | 57.0 | 17784 | 0.3485 | 0.5271 |
| 0.3574 | 58.0 | 18096 | 0.3478 | 0.5271 |
| 0.3574 | 59.0 | 18408 | 0.3479 | 0.5271 |
| 0.3567 | 60.0 | 18720 | 0.3479 | 0.5271 |

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
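The card does not say which SuperGLUE task the classifier was trained on or how its inputs should be formatted, so the sketch below is only an assumption: it loads the checkpoint as a standard sequence-classification model and scores a placeholder premise/hypothesis pair.

```python
# pip install -q transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dkqjrm/20230822124255"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Assumption: the model scores a sentence pair, as in most SuperGLUE classification tasks.
inputs = tokenizer("The cat sat on the mat.", "A cat is on a mat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```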
dkqjrm/20230822124929
dkqjrm
2023-08-22T05:42:24Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-22T03:49:47Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822124929'
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 20230822124929

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3407
- Accuracy: 0.6570

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.3734 | 0.5307 |
| 0.4216 | 2.0 | 624 | 0.3802 | 0.4729 |
| 0.4216 | 3.0 | 936 | 0.4299 | 0.4765 |
| 0.3883 | 4.0 | 1248 | 0.3490 | 0.5451 |
| 0.3918 | 5.0 | 1560 | 0.3461 | 0.5884 |
| 0.3918 | 6.0 | 1872 | 0.3599 | 0.5523 |
| 0.3764 | 7.0 | 2184 | 0.3565 | 0.5451 |
| 0.3764 | 8.0 | 2496 | 0.3611 | 0.5018 |
| 0.3794 | 9.0 | 2808 | 0.4040 | 0.5415 |
| 0.3778 | 10.0 | 3120 | 0.3622 | 0.4729 |
| 0.3778 | 11.0 | 3432 | 0.4954 | 0.4693 |
| 0.3813 | 12.0 | 3744 | 0.3602 | 0.4765 |
| 0.3718 | 13.0 | 4056 | 0.3453 | 0.5415 |
| 0.3718 | 14.0 | 4368 | 0.3640 | 0.5343 |
| 0.3701 | 15.0 | 4680 | 0.3589 | 0.4838 |
| 0.3701 | 16.0 | 4992 | 0.3700 | 0.5632 |
| 0.371 | 17.0 | 5304 | 0.4147 | 0.5343 |
| 0.3644 | 18.0 | 5616 | 0.3505 | 0.5740 |
| 0.3644 | 19.0 | 5928 | 0.3736 | 0.4874 |
| 0.3667 | 20.0 | 6240 | 0.3637 | 0.5704 |
| 0.3629 | 21.0 | 6552 | 0.3412 | 0.6209 |
| 0.3629 | 22.0 | 6864 | 0.3451 | 0.6282 |
| 0.3574 | 23.0 | 7176 | 0.3626 | 0.6065 |
| 0.3574 | 24.0 | 7488 | 0.3732 | 0.4874 |
| 0.3565 | 25.0 | 7800 | 0.3427 | 0.6173 |
| 0.3525 | 26.0 | 8112 | 0.3855 | 0.5812 |
| 0.3525 | 27.0 | 8424 | 0.3384 | 0.6498 |
| 0.3523 | 28.0 | 8736 | 0.3408 | 0.6282 |
| 0.3505 | 29.0 | 9048 | 0.3548 | 0.6101 |
| 0.3505 | 30.0 | 9360 | 0.3861 | 0.5921 |
| 0.3509 | 31.0 | 9672 | 0.3710 | 0.5993 |
| 0.3509 | 32.0 | 9984 | 0.3897 | 0.5993 |
| 0.3494 | 33.0 | 10296 | 0.3535 | 0.6354 |
| 0.3459 | 34.0 | 10608 | 0.3389 | 0.6282 |
| 0.3459 | 35.0 | 10920 | 0.3397 | 0.6209 |
| 0.3429 | 36.0 | 11232 | 0.3450 | 0.6101 |
| 0.3432 | 37.0 | 11544 | 0.3925 | 0.6065 |
| 0.3432 | 38.0 | 11856 | 0.3294 | 0.6715 |
| 0.341 | 39.0 | 12168 | 0.3442 | 0.6390 |
| 0.341 | 40.0 | 12480 | 0.3421 | 0.6462 |
| 0.3392 | 41.0 | 12792 | 0.3371 | 0.6390 |
| 0.3392 | 42.0 | 13104 | 0.3326 | 0.6534 |
| 0.3392 | 43.0 | 13416 | 0.3714 | 0.6282 |
| 0.337 | 44.0 | 13728 | 0.3535 | 0.6245 |
| 0.3352 | 45.0 | 14040 | 0.3548 | 0.6245 |
| 0.3352 | 46.0 | 14352 | 0.3361 | 0.6570 |
| 0.3335 | 47.0 | 14664 | 0.3329 | 0.6859 |
| 0.3335 | 48.0 | 14976 | 0.3423 | 0.6462 |
| 0.3329 | 49.0 | 15288 | 0.3356 | 0.6534 |
| 0.3308 | 50.0 | 15600 | 0.3398 | 0.6643 |
| 0.3308 | 51.0 | 15912 | 0.3374 | 0.6679 |
| 0.3291 | 52.0 | 16224 | 0.3315 | 0.6787 |
| 0.3284 | 53.0 | 16536 | 0.3650 | 0.6318 |
| 0.3284 | 54.0 | 16848 | 0.3537 | 0.6282 |
| 0.3257 | 55.0 | 17160 | 0.3480 | 0.6426 |
| 0.3257 | 56.0 | 17472 | 0.3424 | 0.6570 |
| 0.3274 | 57.0 | 17784 | 0.3413 | 0.6679 |
| 0.3265 | 58.0 | 18096 | 0.3442 | 0.6390 |
| 0.3265 | 59.0 | 18408 | 0.3417 | 0.6534 |
| 0.326 | 60.0 | 18720 | 0.3407 | 0.6570 |

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
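The hyperparameters listed above map directly onto `transformers` `TrainingArguments`. The original training script is not published, so the sketch below only mirrors the reported settings; the output directory name and everything omitted (dataset, model head, metric) are illustrative assumptions.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; dataset loading, the
# classification head, and the accuracy metric are omitted because the
# original training script is not shown.
training_args = TrainingArguments(
    output_dir="20230822124929",   # placeholder run name
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=11,
    lr_scheduler_type="linear",
    num_train_epochs=60.0,
)
```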