Dataset columns:

| column | dtype | range / classes |
|:--------------|:-----------------------|:-------------------------------------------|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-09 00:41:25 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-09 00:41:08 |
| card | string | lengths 11 to 1.01M |
roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q3_K_L-GGUF
roleplaiapp
2025-01-31T09:02:08Z
11
0
transformers
[ "transformers", "gguf", "3-bit", "32b", "Q3_K_L", "ablated", "deepseek", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T09:01:02Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 32b - Q3_K_L - ablated - deepseek - gguf - llama-cpp - qwen - text-generation --- # roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q3_K_L-GGUF **Repo:** `roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q3_K_L-GGUF` **Original Model:** `deepseek-r1-qwen-2.5-32B-ablated` **Quantized File:** `deepseek-r1-qwen-2.5-32B-ablated-Q3_K_L.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_L` ## Overview This is a GGUF Q3_K_L quantized version of deepseek-r1-qwen-2.5-32B-ablated ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
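The card above documents a GGUF Q3_K_L quantization but gives no loading code. A minimal sketch, assuming the `llama-cpp-python` bindings and that the repo/file names match those listed in the card:

```python
# Minimal sketch using llama-cpp-python (assumes `pip install llama-cpp-python`);
# repo and file names are taken from the card above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q3_K_L-GGUF",
    filename="deepseek-r1-qwen-2.5-32B-ablated-Q3_K_L.gguf",
    n_ctx=4096,  # context window; adjust to your memory budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```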
saimdev/speecht5_finetuned_haitian_creole_tts
saimdev
2025-01-31T09:00:21Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-to-audio
2025-01-24T21:24:10Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: speecht5_finetuned_haitian_creole_tts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_haitian_creole_tts This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 17504 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:-----:|:---------------:| | 2.627 | 15.1550 | 500 | 0.3270 | | 2.5152 | 30.3101 | 1000 | 0.3280 | | 2.443 | 45.4651 | 1500 | 0.3289 | | 2.4247 | 60.6202 | 2000 | 0.3360 | | 2.4638 | 75.7752 | 2500 | 0.3318 | | 2.4189 | 90.9302 | 3000 | 0.3377 | | 2.311 | 106.0620 | 3500 | 0.3432 | | 2.2877 | 121.2171 | 4000 | 0.3439 | | 2.2857 | 136.3721 | 4500 | 0.3434 | | 2.2751 | 151.5271 | 5000 | 0.3427 | | 2.2665 | 166.6822 | 5500 | 0.3456 | | 2.2909 | 181.8372 | 6000 | 0.3531 | | 2.2922 | 196.9922 | 6500 | 0.3479 | | 2.2163 | 212.1240 | 7000 | 0.3468 | | 2.1916 | 227.2791 | 7500 | 0.3465 | | 2.1993 | 242.4341 | 8000 | 0.3517 | | 2.1799 | 257.5891 | 8500 | 0.3534 | | 2.1627 | 272.7442 | 9000 | 0.3481 | | 2.2402 | 287.8992 | 9500 | 0.3542 | | 2.1602 | 303.0310 | 10000 | 0.3541 | | 2.1541 | 318.1860 | 10500 | 0.3506 | | 2.1236 | 333.3411 | 11000 | 0.3619 | | 2.1321 | 348.4961 | 11500 | 0.3519 | | 2.1113 | 363.6512 | 12000 | 0.3588 | | 2.1757 | 378.8062 | 12500 | 0.3512 | | 2.1742 | 393.9612 | 13000 | 0.3578 | | 2.0891 | 409.0930 | 13500 | 0.3593 | | 2.0869 | 424.2481 | 14000 | 0.3601 | | 2.0978 | 439.4031 | 14500 | 0.3589 | | 2.0819 | 454.5581 | 15000 | 0.3637 | | 2.0664 | 469.7132 | 15500 | 0.3589 | | 2.1653 | 484.8682 | 16000 | 0.3586 | | 2.0797 | 500.0 | 16500 | 0.3606 | | 2.0506 | 515.1550 | 17000 | 0.3661 | | 2.0713 | 530.3101 | 17500 | 0.3680 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
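The TTS card above lists training details but no inference snippet. A minimal sketch, assuming the standard `transformers` SpeechT5 API, the base `microsoft/speecht5_tts` processor, and a speaker x-vector borrowed from the CMU Arctic embeddings dataset (all assumptions; the card does not specify them):

```python
# Minimal inference sketch for a fine-tuned SpeechT5 TTS checkpoint.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("saimdev/speecht5_finetuned_haitian_creole_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bonjou, kijan ou ye?", return_tensors="pt")

# Any 512-dim x-vector can serve as the speaker embedding; one from CMU Arctic is used here.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```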
mradermacher/LLilmonix3b-v0.4a-GGUF
mradermacher
2025-01-31T09:00:17Z
171
0
transformers
[ "transformers", "gguf", "en", "base_model:922-CA/LLilmonix3b-v0.4a", "base_model:quantized:922-CA/LLilmonix3b-v0.4a", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-01-31T08:17:49Z
--- base_model: 922-CA/LLilmonix3b-v0.4a language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/922-CA/LLilmonix3b-v0.4a <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q2_K.gguf) | Q2_K | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q3_K_S.gguf) | Q3_K_S | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.IQ4_XS.gguf) | IQ4_XS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q4_K_M.gguf) | Q4_K_M | 2.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q6_K.gguf) | Q6_K | 3.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LLilmonix3b-v0.4a-GGUF/resolve/main/LLilmonix3b-v0.4a.f16.gguf) | f16 | 7.0 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. 
<!-- end -->
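The quant table above leaves the download step implicit. A minimal sketch, assuming `huggingface_hub` is installed, for fetching one of the listed files (the Q4_K_M entry is used purely as an example):

```python
# Minimal sketch: download a single quant file from the "Provided Quants" table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/LLilmonix3b-v0.4a-GGUF",
    filename="LLilmonix3b-v0.4a.Q4_K_M.gguf",  # any filename from the table works
)
print(f"Downloaded to {path}")  # pass this path to llama.cpp or llama-cpp-python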
Best000/d6ef5ef4-583d-4099-94b0-9e06ea8ebd83
Best000
2025-01-31T09:00:11Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Mistral-7b-128k", "base_model:adapter:NousResearch/Yarn-Mistral-7b-128k", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:45:01Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Mistral-7b-128k tags: - axolotl - generated_from_trainer model-index: - name: d6ef5ef4-583d-4099-94b0-9e06ea8ebd83 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Mistral-7b-128k bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 248079f476a07bc3_train_data.json ds_type: json format: custom path: /workspace/input_data/248079f476a07bc3_train_data.json type: field_instruction: problem field_output: qwq format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/d6ef5ef4-583d-4099-94b0-9e06ea8ebd83 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/248079f476a07bc3_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e6be45b1-93a3-491a-ac21-d779477a89fc wandb_project: Birthday-SN56-16-Gradients-On-Demand wandb_run: your_name wandb_runid: e6be45b1-93a3-491a-ac21-d779477a89fc warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d6ef5ef4-583d-4099-94b0-9e06ea8ebd83 This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.8032 | | 2.9891 | 0.0019 | 13 | 0.6126 | | 2.3716 | 0.0037 | 26 | 0.5676 | | 2.1593 | 0.0056 | 39 | 0.5509 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
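Because the card above describes a PEFT LoRA adapter rather than full weights, using it means attaching it to the base model. A minimal sketch, assuming the `peft`/`transformers` loading pattern (the card itself ships no usage code):

```python
# Minimal sketch: attach the LoRA adapter from this repo to its base model.
# trust_remote_code mirrors the axolotl config in the card above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-128k", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "Best000/d6ef5ef4-583d-4099-94b0-9e06ea8ebd83")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-128k")

inputs = tokenizer("Solve: if 3x + 5 = 20, what is x?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```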
alchemist69/8ddc7444-4f06-486b-add5-be695ba775a1
alchemist69
2025-01-31T08:59:59Z
15
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:41:53Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - axolotl - generated_from_trainer model-index: - name: 8ddc7444-4f06-486b-add5-be695ba775a1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama_v1.1 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f6627dfddf7998ee_train_data.json ds_type: json format: custom path: /workspace/input_data/f6627dfddf7998ee_train_data.json type: field_input: traj_0_response field_instruction: prompt field_output: traj_0_solution_0 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: alchemist69/8ddc7444-4f06-486b-add5-be695ba775a1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/f6627dfddf7998ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 41e012f9-ee25-49ae-abe0-b64021ea6e9d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 41e012f9-ee25-49ae-abe0-b64021ea6e9d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 8ddc7444-4f06-486b-add5-be695ba775a1 This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8087 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6269 | 0.0005 | 1 | 1.3732 | | 0.8229 | 0.0273 | 50 | 0.8983 | | 0.7812 | 0.0547 | 100 | 0.8361 | | 0.7785 | 0.0820 | 150 | 0.8107 | | 0.7647 | 0.1093 | 200 | 0.8087 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
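As an alternative to attaching the adapter at load time (see the sketch after the Yarn-Mistral card above), the LoRA weights can be merged into the base model for standalone use. A minimal sketch, assuming the `peft` `merge_and_unload` API:

```python
# Minimal sketch: merge this LoRA adapter into TinyLlama and save a standalone model.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "alchemist69/8ddc7444-4f06-486b-add5-be695ba775a1"
)
merged = model.merge_and_unload()           # fold the LoRA deltas into the base weights
merged.save_pretrained("tinyllama-merged")  # plain transformers checkpoint, no peft needed
```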
robiulawaldev/f51fc536-ec44-4ee6-86aa-63f55f95a32d
robiulawaldev
2025-01-31T08:59:16Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Mistral-7b-128k", "base_model:adapter:NousResearch/Yarn-Mistral-7b-128k", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:44:41Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Mistral-7b-128k tags: - axolotl - generated_from_trainer model-index: - name: f51fc536-ec44-4ee6-86aa-63f55f95a32d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Mistral-7b-128k bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 248079f476a07bc3_train_data.json ds_type: json format: custom path: /workspace/input_data/248079f476a07bc3_train_data.json type: field_instruction: problem field_output: qwq format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/f51fc536-ec44-4ee6-86aa-63f55f95a32d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: constant max_steps: 55 micro_batch_size: 4 mlflow_experiment_name: /tmp/248079f476a07bc3_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e6be45b1-93a3-491a-ac21-d779477a89fc wandb_project: Birthday-SN56-37-Gradients-On-Demand wandb_run: your_name wandb_runid: e6be45b1-93a3-491a-ac21-d779477a89fc warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f51fc536-ec44-4ee6-86aa-63f55f95a32d This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 55 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.7136 | | 1.3604 | 0.0020 | 14 | 0.5867 | | 1.1486 | 0.0040 | 28 | 0.5643 | | 1.1113 | 0.0060 | 42 | 0.5571 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q2_K-GGUF
roleplaiapp
2025-01-31T08:54:48Z
519
0
transformers
[ "transformers", "gguf", "2-bit", "32b", "Q2_K", "ablated", "deepseek", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T08:54:04Z
--- library_name: transformers pipeline_tag: text-generation tags: - 2-bit - 32b - Q2_K - ablated - deepseek - gguf - llama-cpp - qwen - text-generation --- # roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q2_K-GGUF **Repo:** `roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q2_K-GGUF` **Original Model:** `deepseek-r1-qwen-2.5-32B-ablated` **Quantized File:** `deepseek-r1-qwen-2.5-32B-ablated-Q2_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` ## Overview This is a GGUF Q2_K quantized version of deepseek-r1-qwen-2.5-32B-ablated ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
mrferr3t/218f357a-bf96-4f3d-9a32-ebc5d19ab814
mrferr3t
2025-01-31T08:53:41Z
7
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-01-31T08:47:43Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 218f357a-bf96-4f3d-9a32-ebc5d19ab814 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8867f9c63654d921_train_data.json ds_type: json format: custom path: /workspace/input_data/8867f9c63654d921_train_data.json type: field_instruction: func_name field_output: func_documentation_string format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/218f357a-bf96-4f3d-9a32-ebc5d19ab814 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/8867f9c63654d921_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 36eaa8be-1d88-48f8-9ab8-9b8a8a7590a9 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 36eaa8be-1d88-48f8-9ab8-9b8a8a7590a9 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 218f357a-bf96-4f3d-9a32-ebc5d19ab814 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.8013 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 8.6082 | 0.0001 | 1 | 2.5309 | | 6.4575 | 0.0037 | 50 | 1.8013 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
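The card above describes a docstring-generation adapter (function name in, documentation string out) but gives no usage code. A minimal sketch, assuming the transformers-PEFT integration that resolves an adapter repo to its base model (an assumption; explicitly loading `microsoft/Phi-3-mini-4k-instruct` and wrapping it with `PeftModel` works the same way):

```python
# Minimal sketch: generate a docstring with the adapter above.
# Assumes the transformers<->peft integration can resolve the adapter repo directly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "mrferr3t/218f357a-bf96-4f3d-9a32-ebc5d19ab814", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Per the axolotl config above, the adapter was trained with the function name as the instruction.
inputs = tokenizer("binary_search", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```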
Polly1231/llava-v1.5-7b-vlguard-without_helpfulnessData-20250131
Polly1231
2025-01-31T08:53:13Z
128
0
peft
[ "peft", "safetensors", "llava_llama", "arxiv:1910.09700", "base_model:liuhaotian/llava-v1.5-7b", "base_model:adapter:liuhaotian/llava-v1.5-7b", "region:us" ]
null
2025-01-31T08:33:21Z
--- base_model: liuhaotian/llava-v1.5-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
robiual-awal/6357f811-c145-4c2a-805c-0b3d78e4a70a
robiual-awal
2025-01-31T08:52:33Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:08:18Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 6357f811-c145-4c2a-805c-0b3d78e4a70a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 76456b933bd6f3db_train_data.json ds_type: json format: custom path: /workspace/input_data/76456b933bd6f3db_train_data.json type: field_input: tokens field_instruction: wikimedia_file field_output: caption format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: robiual-awal/6357f811-c145-4c2a-805c-0b3d78e4a70a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/76456b933bd6f3db_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 wandb_project: Birthday-SN56-29-Gradients-On-Demand wandb_run: your_name wandb_runid: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 6357f811-c145-4c2a-805c-0b3d78e4a70a This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 0.2694 | | 0.0289 | 0.0014 | 50 | 0.0271 | | 0.019 | 0.0028 | 100 | 0.0235 | | 0.054 | 0.0042 | 150 | 0.0196 | | 0.0088 | 0.0056 | 200 | 0.0189 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
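Several of the auto-generated cards above report a `total_train_batch_size` alongside a smaller per-device batch size; the relationship is just a product, as the small check below illustrates (variable names are ours, not axolotl's):

```python
# Effective (total) train batch size as reported in the cards above:
# per-device batch size x gradient accumulation steps x number of devices.
micro_batch_size = 2            # "train_batch_size" in this card
gradient_accumulation_steps = 4
world_size = 1                  # single GPU assumed

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
assert total_train_batch_size == 8  # matches "total_train_batch_size: 8" above
```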
ypesk/frugal-ai-EURECOM-ct-bert-baseline
ypesk
2025-01-31T08:51:54Z
10
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-01-29T15:18:09Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
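The mixin card above only links to the docs. A minimal sketch of the general `PyTorchModelHubMixin` pattern, assuming a placeholder `MyClassifier` architecture (the real architecture for this repo is not documented, so the class below is illustrative only):

```python
# General PyTorchModelHubMixin pattern (illustrative; the actual architecture of the
# repo above is not documented in its card, so MyClassifier is a placeholder).
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 768, num_labels: int = 2):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_labels)

    def forward(self, x):
        return self.head(x)

# push_to_hub / from_pretrained come from the mixin:
model = MyClassifier(hidden_size=768, num_labels=2)
# model.push_to_hub("your-username/your-repo")
# restored = MyClassifier.from_pretrained("your-username/your-repo")
```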
AndreasStrid/Andy-Model
AndreasStrid
2025-01-31T08:50:51Z
38
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T08:22:13Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Andy --- # Andy Model <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Andy` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('AndreasStrid/Andy-Model', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
bane5631/1266e6bf-0d72-411f-8fa4-69dbd4ee4ba9
bane5631
2025-01-31T08:50:49Z
9
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:adapter:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T08:16:44Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Base-2407 tags: - axolotl - generated_from_trainer model-index: - name: 1266e6bf-0d72-411f-8fa4-69dbd4ee4ba9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Base-2407 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e25cb6311706a7c7_train_data.json ds_type: json format: custom path: /workspace/input_data/e25cb6311706a7c7_train_data.json type: field_instruction: prompt_attack field_output: output_vittima format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: bane5631/1266e6bf-0d72-411f-8fa4-69dbd4ee4ba9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/e25cb6311706a7c7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768f12f5-c6fb-403d-9cec-27135dc3578c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 768f12f5-c6fb-403d-9cec-27135dc3578c warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1266e6bf-0d72-411f-8fa4-69dbd4ee4ba9 This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 167 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.3808 | 0.9985 | 166 | 1.1600 | | 4.4278 | 1.0045 | 167 | 1.1578 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
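The axolotl config above loads the base model in 8-bit (bitsandbytes) during training; inference can mirror that to fit the Mistral-Nemo base in less memory. A minimal sketch, assuming the `BitsAndBytesConfig` API in transformers, `bitsandbytes` installed, and a CUDA GPU:

```python
# Minimal sketch: load the Mistral-Nemo base in 8-bit, then attach the LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Nemo-Base-2407",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "bane5631/1266e6bf-0d72-411f-8fa4-69dbd4ee4ba9")
```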
lesso13/77b5485f-520e-422b-8e85-60c8f0281ecc
lesso13
2025-01-31T08:50:15Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M-Instruct", "base_model:adapter:unsloth/SmolLM2-360M-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:27:55Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 77b5485f-520e-422b-8e85-60c8f0281ecc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-360M-Instruct bf16: auto chat_template: llama3 datasets: - data_files: - ed31b7df3268d6c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ed31b7df3268d6c5_train_data.json type: field_input: '' field_instruction: input field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso13/77b5485f-520e-422b-8e85-60c8f0281ecc hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ed31b7df3268d6c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2ccd3dbf-7834-4a29-bd07-6df17c1f1f49 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2ccd3dbf-7834-4a29-bd07-6df17c1f1f49 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 77b5485f-520e-422b-8e85-60c8f0281ecc This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0050 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Razvan1974/Lavinia
Razvan1974
2025-01-31T08:48:47Z
7
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T08:28:46Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Lav --- # Lavinia <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Lav` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Razvan1974/Lavinia', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-IQ4_XS-GGUF
roleplaiapp
2025-01-31T08:48:14Z
23
0
transformers
[ "transformers", "gguf", "IQ4_XS", "alpaca", "deepseek", "distill", "finetuned", "iq4", "llama-cpp", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T08:47:55Z
--- library_name: transformers pipeline_tag: text-generation tags: - IQ4_XS - alpaca - deepseek - distill - finetuned - gguf - iq4 - llama-cpp - text-generation --- # roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-IQ4_XS-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-IQ4_XS-GGUF` **Original Model:** `DeepSeek-R1-Distill-Alpaca-FineTuned` **Quantized File:** `DeepSeek-R1-Distill-Alpaca-FineTuned.IQ4_XS.gguf` **Quantization:** `GGUF` **Quantization Method:** `IQ4_XS` ## Overview This is a GGUF IQ4_XS quantized version of DeepSeek-R1-Distill-Alpaca-FineTuned ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual
lightblue
2025-01-31T08:47:51Z
338
10
null
[ "safetensors", "qwen2", "reasoning", "am", "ar", "bn", "zh", "cs", "nl", "en", "fr", "de", "el", "ha", "he", "hi", "id", "it", "ja", "jv", "km", "ko", "lo", "ms", "mr", "fa", "pl", "pt", "ro", "ru", "es", "sw", "sv", "tl", "ta", "te", "th", "tr", "uk", "ur", "vi", "dataset:lightblue/reasoning-multilingual-R1-Llama-70B-train", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:13:40Z
--- language: - am - ar - bn - zh - cs - nl - en - fr - de - el - ha - he - hi - id - it - ja - jv - km - ko - lo - ms - mr - fa - pl - pt - ro - ru - es - sw - sv - tl - ta - te - th - tr - uk - ur - vi license: apache-2.0 datasets: - lightblue/reasoning-multilingual-R1-Llama-70B-train tags: - reasoning --- # lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual <div style="width: 100%; height: 160px; display: flex; align-items: center; justify-content: center; border: 8px solid black; font-size: 120px; font-weight: bold; text-align: center; color: #438db8, font-family: 'Helvetica Neue', sans-serif;"> <span style="color: #438db8;">R1</span> &nbsp; <span style="color: blue;">m</span> <span style="color: green;">u</span> <span style="color: purple;">l</span> <span style="color: yellow;">t</span> <span style="color: pink;">i</span> <span style="color: cyan;">l</span> <span style="color: magenta;">i</span> <span style="color: lime;">n</span> <span style="color: teal;">g</span> </div> This is a Deepseek distill finetune trained on multilingual Chain-of-Thought (CoT). When this model is prompted in a language, it will both think and respond in that language, unlike the original R1 which will often think in either Chinese or English. This will make the outputs of these AIs more understandable and explainable to a wider audience. Hopefully this will be useful to the AI community, particularly those developing for languages aside from English and Chinese. This model is a multilingual fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B). Other fine-tuned versions of this model can be found in [our collection, here](https://huggingface.co/collections/lightblue/r1-multilingual-679c890166ac0a84e83e38fa). This model was trained was trained using our [lightblue/reasoning-multilingual-R1-Llama-70B-train](https://huggingface.co/datasets/lightblue/reasoning-multilingual-R1-Llama-70B-train) dataset for ~10 minutes on the 8 x L20 instance ([ecs.gn8is-8x.32xlarge](https://www.alibabacloud.com/help/en/ecs/user-guide/gpu-accelerated-compute-optimized-and-vgpu-accelerated-instance-families-1)) on [Alibaba Cloud](https://www.alibabacloud.com/). # How to use When using these models, we recommend using a sampling temperature of between 0.5-0.7, [as per the original distilled R1 models](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#usage-recommendations). Additionally, we have observed that the model sometimes tends to repeat for more niche languages, so we also recommend setting `repetition_penalty` to 1.1, or higher if the model repeats itself when processing your prompts. We include scripts to use this model in vLLM: <ul> <li><b>vLLM</b> Install [vLLM](https://github.com/vllm-project/vllm/) using `pip install vllm`. <details open> <summary>Show vLLM code</summary> ```python from vllm import LLM, SamplingParams llm = LLM( model="lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual", max_model_len=8_000 ) sampling_params = SamplingParams( temperature=0.5, max_tokens=8_000 ) prompts = [ """学校には1クラスにつき20人の生徒がおり、クラスは合計3つあります。 学校全体では男子と女子がそれぞれ50%ずついます。 1つ目のクラスには女子が15人、2つ目のクラスには女子が12人います。 3つ目のクラスには何人の男子がいますか?""" ] conversations = [ [{"role": "user", "content": x}] for x in prompts ] outputs = llm.chat(conversations, sampling_params=sampling_params) for output in outputs: print(output.outputs[0].text) # <think> # まず、学校の総生徒数を算出します。各クラスに20人の生徒があり、クラスは3つあるため、総生徒数は60人です。 # 次に、学校全体で男子と女子は同じ人数で分布しています。したがって、男子と女子各有30人。 ... 
# したがって、3つ目のクラスの男子数は20 - 3 = 17人です。 # </think> # **解答:** # 学校の総生徒数を算出します。 ... # **最終的な答え:** # \[ # \boxed{17} # \] ``` </details></li> </ul> # Evaluation Through some quick evaluation of our own, we found this model can produce much correctly formatted and accurate results for higher resource languages, such as Japanese, English, German, than lower resource languages, such as Amharic or Lao. We did a **very** quick evaluation of 5 questions with each dataset (written by me and translated by GPT4o Mini) on the [lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) model, and we find that the model is able to fairly reliably output the correct answers and in the correct language for a large variety of languages: For this evaluation, a score of >=0.8 is good, as one of the questions was very hard. The language detection was done using [pycld2](https://pypi.org/project/pycld2/) so errors may occur with the correct language being mistaken for another one. | language | Has a correct think statement | Has the think statement in the correct language | Is the response in the correct language | Is the answer correct | |:----------------|------------:|------------------------:|----------------------:|-------------:| | Amharic | 0.2 | 0 | 0 | 0 | | Arabic | 1 | 0.8 | 0.8 | 0.6 | | Bengali | 1 | 1 | 1 | 0.2 | | Chinese | 1 | 1 | 1 | 0.8 | | Czech | 1 | 1 | 1 | 0.8 | | Dutch | 1 | 1 | 1 | 0.8 | | English | 1 | 1 | 1 | 0.8 | | French | 1 | 1 | 1 | 0.8 | | German | 1 | 1 | 1 | 0.8 | | Greek | 1 | 1 | 1 | 0.6 | | Hausa | 0.4 | 0 | 0 | 0 | | Hebrew | 1 | 0.8 | 1 | 0.6 | | Hindi | 1 | 1 | 1 | 0.8 | | Indonesian | 1 | 1 | 1 | 0.8 | | Italian | 1 | 1 | 1 | 0.8 | | Japanese | 1 | 1 | 0.8 | 0.6 | | Javanese | 0.8 | 0.2 | 0.2 | 0.6 | | Khmer | 0.6 | 0.6 | 0.6 | 0 | | Korean | 1 | 1 | 1 | 1 | | Lao | 0.4 | 0.4 | 0.4 | 0 | | Malay | 1 | 0.4 | 0.4 | 0.8 | | Marathi | 0.6 | 0.4 | 0.6 | 0.2 | | Persian (Farsi) | 0.6 | None* | None* | 0.2 | | Polish | 1 | 1 | 1 | 0.6 | | Portuguese | 1 | 1 | 1 | 0.8 | | Romanian | 1 | 1 | 1 | 0.8 | | Russian | 1 | 1 | 1 | 0.8 | | Spanish | 1 | 1 | 1 | 0.8 | | Swahili | 0.4 | 0.4 | 0.4 | 0 | | Swedish | 1 | 1 | 1 | 0.8 | | Tagalog | 1 | 1 | 1 | 0.8 | | Tamil | 0.8 | 0.8 | 0.8 | 0.2 | | Telugu | 0.8 | 0.6 | 0.8 | 0 | | Thai | 1 | 1 | 1 | 0.8 | | Turkish | 1 | 1 | 1 | 0.8 | | Ukrainian | 1 | 1 | 1 | 0.8 | | Urdu | 1 | 1 | 1 | 0.6 | | Vietnamese | 1 | 1 | 1 | 1 | * There was an error with Farsi detection (my own fault) so we do not report Farsi scores. The evaluation code for this can be found [here](https://drive.google.com/file/d/1P33GpqvKmHoZUsWqqBPXHTToN2W7MDRG/view?usp=sharing). 
# Training code ```yaml ### model model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B ### method stage: sft do_train: true finetuning_type: full deepspeed: /root/LLaMA-Factory/examples/deepspeed/ds_z3_config.json ### dataset dataset: reasoning-multilingual-R1-Llama-70B-train template: qwen cutoff_len: 4096 overwrite_cache: true preprocessing_num_workers: 16 packing: true ### output output_dir: /root/train_outputs/DeepSeek-R1-Distill-Qwen-14B/reasoning-multilingual-R1-Llama-70B-train logging_steps: 1 save_steps: 0.99999 plot_loss: true overwrite_output_dir: true ### train per_device_train_batch_size: 1 gradient_accumulation_steps: 1 learning_rate: 1.0e-5 num_train_epochs: 1.0 lr_scheduler_type: cosine warmup_ratio: 0.01 bf16: true ddp_timeout: 180000000 ### eval val_size: 0.01 per_device_eval_batch_size: 1 eval_strategy: steps eval_steps: 0.1 ``` ```bash echo '{ "reasoning-multilingual-R1-Llama-70B-train": { "hf_hub_url": "lightblue/reasoning-multilingual-R1-Llama-70B-train", "formatting": "sharegpt" } }' > /root/LLaMA-Factory/data/dataset_info.json # # 14B Llama cd /root/LLaMA-Factory && llamafactory-cli train /root/reasoning_multilingual_train_14B.yaml rm -r /root/train_outputs/DeepSeek-R1-Distill-Qwen-14B/reasoning-multilingual-R1-Llama-70B-train/checkpoint* huggingface-cli upload lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual /root/train_outputs/DeepSeek-R1-Distill-Qwen-14B/reasoning-multilingual-R1-Llama-70B-train ``` # License We share this model with the Apache 2.0 license. # Developed by <a href="https://www.lightblue-tech.com"> <img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/> </a> This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue
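The card above recommends a sampling temperature of 0.5-0.7 and a `repetition_penalty` of 1.1 for languages that tend to repeat, but the vLLM snippet it ships only sets the temperature. A minimal sketch of sampling parameters that follow both recommendations (the 1.1 value comes straight from the card):

```python
# Sampling settings following the recommendations in the card above.
from vllm import SamplingParams

sampling_params = SamplingParams(
    temperature=0.6,         # within the recommended 0.5-0.7 range
    repetition_penalty=1.1,  # recommended for lower-resource languages
    max_tokens=8_000,
)
```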
Aescleah/stackexchange_parenting-Q2_K-GGUF
Aescleah
2025-01-31T08:46:49Z
21
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:mlfoundations-dev/stackexchange_parenting", "base_model:quantized:mlfoundations-dev/stackexchange_parenting", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T08:46:32Z
--- library_name: transformers license: llama3.1 base_model: mlfoundations-dev/stackexchange_parenting tags: - llama-factory - full - generated_from_trainer - llama-cpp - gguf-my-repo model-index: - name: stackexchange_parenting results: [] --- # Aescleah/stackexchange_parenting-Q2_K-GGUF This model was converted to GGUF format from [`mlfoundations-dev/stackexchange_parenting`](https://huggingface.co/mlfoundations-dev/stackexchange_parenting) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mlfoundations-dev/stackexchange_parenting) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Aescleah/stackexchange_parenting-Q2_K-GGUF --hf-file stackexchange_parenting-q2_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Aescleah/stackexchange_parenting-Q2_K-GGUF --hf-file stackexchange_parenting-q2_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Aescleah/stackexchange_parenting-Q2_K-GGUF --hf-file stackexchange_parenting-q2_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Aescleah/stackexchange_parenting-Q2_K-GGUF --hf-file stackexchange_parenting-q2_k.gguf -c 2048 ```
kostiantynk/9f47bedf-e620-44dc-a3cd-ae5edc5612cd
kostiantynk
2025-01-31T08:44:49Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:10:10Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 9f47bedf-e620-44dc-a3cd-ae5edc5612cd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 76456b933bd6f3db_train_data.json ds_type: json format: custom path: /workspace/input_data/76456b933bd6f3db_train_data.json type: field_input: tokens field_instruction: wikimedia_file field_output: caption format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk/9f47bedf-e620-44dc-a3cd-ae5edc5612cd hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/76456b933bd6f3db_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 wandb_project: Birthday-SN56-7-Gradients-On-Demand wandb_run: your_name wandb_runid: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9f47bedf-e620-44dc-a3cd-ae5edc5612cd This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0332 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 0.2820 | | 0.1616 | 0.0004 | 13 | 0.0460 | | 0.1001 | 0.0007 | 26 | 0.0358 | | 0.0308 | 0.0011 | 39 | 0.0332 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
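The card above stops at the framework versions and never shows how to run the adapter. A minimal usage sketch, assuming the standard 🤗 PEFT loading API — the base model and adapter id are taken from the config above, while the prompt and generation settings are purely illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # base_model from the config above
adapter_id = "kostiantynk/9f47bedf-e620-44dc-a3cd-ae5edc5612cd"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Illustrative prompt only; the adapter was trained to map image metadata to captions.
prompt = "Write a short caption for a Wikimedia image of a lighthouse at sunset."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the adapter was trained with `chat_template: llama3` and a custom prompt format, output from a raw prompt like this is only indicative.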
WiroAI/Hyunjin-Flux-LoRA
WiroAI
2025-01-31T08:44:47Z
14
2
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "transformers", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-14T06:45:59Z
--- tags: - text-to-image - flux - lora - diffusers - transformers - template:sd-lora - ai-toolkit widget: - text: hyunjinwiro, a young Korean idol with soft blonde hair, flawless skin, and expressive brown eyes. He sits on a velvet couch in a luxurious studio, wearing a pastel sweater and silver accessories. Ultra-HD, realistic, focusing on his gentle charisma and intricate details. output: url: hyunjin2.png license: other instance_prompt: hyunjinwiro base_model: - black-forest-labs/FLUX.1-dev license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- <div align="center"> <img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://civitai.com/user/wiroai" target="_blank" style="margin: 2px;"> <img alt="CivitAI" src="https://huggingface.co/WiroAI/pokemon-flux-lora/resolve/main/civitai.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-apache 2.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Model Details ### Model Description This LoRA is trained for anyone who like Hyunjin from Stray Kids. - **Developed by:** [Wiro AI - ML Team] - **Shared by:** [Wiro AI](https://wiro.ai/) <Gallery /> ## Trigger words You should use `hyunjinwiro` to trigger the image generation. ## Civitai model link: [civitai](https://civitai.com/models/1139290/hyunjin-from-stray-kids-flux-lora) ```py from diffusers import FluxPipeline import torch pipeline = FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('WiroAI/Hyunjin-Flux-LoRA', weight_name='hyunjin_flux_lora.safetensors') image = pipeline('hyunjinwiro, a young Korean idol with soft blonde hair, flawless skin, and expressive brown eyes. He sits on a velvet couch in a luxurious studio, wearing a pastel sweater and silver accessories. Ultra-HD, realistic, focusing on his gentle charisma and intricate details.').images[0] image.save("output.png") ```
WiroAI/Momo-Flux-LoRA
WiroAI
2025-01-31T08:44:37Z
167
3
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "transformers", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-15T07:01:18Z
--- tags: - text-to-image - flux - lora - diffusers - transformers - template:sd-lora - ai-toolkit widget: - text: mmowiro, A Korean pop star with sleek black hair parted to one side and flawless fair skin. She wears a bold black leather jacket with silver embellishments, standing under cool-toned studio lighting. Her piercing gaze conveys confidence and intensity, with no smile. Ultra-HD, realistic, emphasizing her strong features and stylish outfit. output: url: momo2.png license: other instance_prompt: mmowiro base_model: - black-forest-labs/FLUX.1-dev license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- <div align="center"> <img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://civitai.com/user/wiroai" target="_blank" style="margin: 2px;"> <img alt="CivitAI" src="https://huggingface.co/WiroAI/pokemon-flux-lora/resolve/main/civitai.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-apache 2.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Model Details ### Model Description This LoRA is trained for anyone who like Momo from Twice. - **Developed by:** [Wiro AI - ML Team] - **Shared by:** [Wiro AI](https://wiro.ai/) <Gallery /> ## Trigger words You should use `mmowiro` to trigger the image generation. ## Civitai model link: [civitai](https://civitai.com/models/1143080/momo-from-twice-flux-lora) ```py from diffusers import FluxPipeline import torch pipeline = FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('WiroAI/Momo-Flux-LoRA', weight_name='momo_flux_lora.safetensors') image = pipeline('mmowiro, a Korean pop star with sleek black hair parted to one side and flawless fair skin. She wears a bold black leather jacket with silver embellishments, standing under cool-toned studio lighting. Her piercing gaze conveys confidence and intensity, with no smile. Ultra-HD, realistic, emphasizing her strong features and stylish outfit.').images[0] image.save("output.png") ```
nathanialhunt/03dfd3df-3142-4786-a8dd-14f6c5dd0472
nathanialhunt
2025-01-31T08:44:34Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:10:15Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 03dfd3df-3142-4786-a8dd-14f6c5dd0472 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 76456b933bd6f3db_train_data.json ds_type: json format: custom path: /workspace/input_data/76456b933bd6f3db_train_data.json type: field_input: tokens field_instruction: wikimedia_file field_output: caption format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nathanialhunt/03dfd3df-3142-4786-a8dd-14f6c5dd0472 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/76456b933bd6f3db_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 wandb_project: Birthday-SN56-24-Gradients-On-Demand wandb_run: your_name wandb_runid: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 03dfd3df-3142-4786-a8dd-14f6c5dd0472 This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 0.2820 | | 0.1613 | 0.0004 | 13 | 0.0465 | | 0.1008 | 0.0007 | 26 | 0.0361 | | 0.0314 | 0.0011 | 39 | 0.0336 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
WenWW/HNC_D1-1.5_2048_epoch3
WenWW
2025-01-31T08:44:10Z
27
0
transformers
[ "transformers", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2025-01-31T08:43:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
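The "How to Get Started with the Model" section above is still marked as needing more information. Given the repo's `clip` and `zero-shot-image-classification` tags, a minimal sketch assuming the checkpoint follows the standard 🤗 Transformers CLIP layout — the image path and candidate labels are purely illustrative:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "WenWW/HNC_D1-1.5_2048_epoch3"  # repo id from the record above
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # any local image; illustrative only
labels = ["a photo of a cat", "a photo of a dog"]  # hypothetical label set

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-to-text similarity as probabilities
print(dict(zip(labels, probs[0].tolist())))
```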
WiroAI/Jennie-Flux-LoRA
WiroAI
2025-01-31T08:44:00Z
55
3
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "transformers", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-16T10:55:52Z
--- tags: - text-to-image - flux - lora - diffusers - transformers - template:sd-lora - ai-toolkit widget: - text: jenniewiro, A cheerful K-pop star with wavy blonde hair and glowing skin, wearing a colorful oversized hoodie covered in cartoon prints. She poses playfully in a recording studio, holding a giant pair of headphones over her head with a wide, exaggerated grin. Ultra-HD, realistic, capturing her vibrant energy and comedic charm. output: url: jennie2.png license: other instance_prompt: jenniewiro base_model: - black-forest-labs/FLUX.1-dev license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- <div align="center"> <img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://civitai.com/user/wiroai" target="_blank" style="margin: 2px;"> <img alt="CivitAI" src="https://huggingface.co/WiroAI/pokemon-flux-lora/resolve/main/civitai.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-apache 2.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Model Details ### Model Description This LoRA is trained for anyone who like Jennie from Blackpink. - **Developed by:** [Wiro AI - ML Team] - **Shared by:** [Wiro AI](https://wiro.ai/) <Gallery /> ## Trigger words You should use `jenniewiro` to trigger the image generation. ## Civitai model link: [civitai](https://civitai.com/models/1147143?modelVersionId=1290176) ```py from diffusers import FluxPipeline import torch pipeline = FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('WiroAI/Jennie-Flux-LoRA', weight_name='jennie_flux_lora.safetensors') image = pipeline('jenniewiro, A cheerful K-pop star with wavy blonde hair and glowing skin, wearing a colorful oversized hoodie covered in cartoon prints. She poses playfully in a recording studio, holding a giant pair of headphones over her head with a wide, exaggerated grin. Ultra-HD, realistic, capturing her vibrant energy and comedic charm.').images[0] image.save("output.png") ```
WiroAI/Nayeon-Flux-LoRA
WiroAI
2025-01-31T08:43:56Z
191
4
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "transformers", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-08T09:03:21Z
--- tags: - text-to-image - flux - lora - diffusers - transformers - template:sd-lora - ai-toolkit widget: - text: nayeonwiro, a stylish woman with white skin, purple hair, and brown eyes. She is wearing a tailored black trench coat over a turtleneck sweater, standing on a bustling city street at dusk. The glow of neon lights reflects off nearby glass windows, creating a vibrant urban scene. output: url: nayeon1.png license: other instance_prompt: nayeonwiro base_model: - black-forest-labs/FLUX.1-dev license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- <div align="center"> <img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://civitai.com/user/wiroai" target="_blank" style="margin: 2px;"> <img alt="CivitAI" src="https://huggingface.co/WiroAI/pokemon-flux-lora/resolve/main/civitai.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-apache 2.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Model Details ### Model Description This LoRA is trained for anyone who like Nayeon from Twice. - **Developed by:** [Wiro AI - ML Team] - **Shared by:** [Wiro AI] <Gallery /> ## Trigger words You should use `nayeonwiro` to trigger the image generation. ## Civitai model link: [civitai](https://civitai.com/models/1095496/nayeon-from-twice-flux-lora) ```py from diffusers import FluxPipeline import torch pipeline = FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('WiroAI/Nayeon-Flux-LoRA', weight_name='nayeon_flux_lora.safetensors') image = pipeline('nayeonwiro, a stylish woman with white skin, purple hair, and brown eyes. She is wearing a tailored black trench coat over a turtleneck sweater, standing on a bustling city street at dusk. The glow of neon lights reflects off nearby glass windows, creating a vibrant urban scene.').images[0] image.save("output.png") ```
adammandic87/0bde45f2-6a3f-4cdc-b420-228b0bf659a3
adammandic87
2025-01-31T08:43:14Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:08:50Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 0bde45f2-6a3f-4cdc-b420-228b0bf659a3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 76456b933bd6f3db_train_data.json ds_type: json format: custom path: /workspace/input_data/76456b933bd6f3db_train_data.json type: field_input: tokens field_instruction: wikimedia_file field_output: caption format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/0bde45f2-6a3f-4cdc-b420-228b0bf659a3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/76456b933bd6f3db_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 wandb_project: Birthday-SN56-34-Gradients-On-Demand wandb_run: your_name wandb_runid: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 0bde45f2-6a3f-4cdc-b420-228b0bf659a3 This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 0.2820 | | 0.2279 | 0.0004 | 13 | 0.0746 | | 0.1341 | 0.0007 | 26 | 0.0444 | | 0.0518 | 0.0011 | 39 | 0.0388 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
WiroAI/Jisoo-Flux-LoRA
WiroAI
2025-01-31T08:43:10Z
97
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "transformers", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-22T07:30:53Z
--- tags: - text-to-image - flux - lora - diffusers - transformers - template:sd-lora - ai-toolkit widget: - text: jisoowiro, A 20-year-old Korean singer with short black hair and soft fair skin, wearing a casual oversized hoodie and ripped jeans. She stands on a lively city street at sunset, holding a guitar case. Her warm smile reflects her youthful energy, while neon shop signs illuminate the background. Ultra-HD, realistic, with intricate details of her outfit and the vibrant urban setting. output: url: jisoo1.png license: other instance_prompt: jisoowiro base_model: - black-forest-labs/FLUX.1-dev license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- <div align="center"> <img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://civitai.com/user/wiroai" target="_blank" style="margin: 2px;"> <img alt="CivitAI" src="https://huggingface.co/WiroAI/pokemon-flux-lora/resolve/main/civitai.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-apache 2.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Model Details ### Model Description This LoRA is trained for anyone who like Jisoo from Blackpink. - **Developed by:** [Wiro AI - ML Team] - **Shared by:** [Wiro AI](https://wiro.ai/) <Gallery /> ## Trigger words You should use `jisoowiro` to trigger the image generation. ## Civitai model link: [civitai](https://civitai.com/models/1168879/jisoo-from-blackpink-flux-lora) ```py from diffusers import FluxPipeline import torch pipeline = FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('WiroAI/Jisoo-Flux-LoRA', weight_name='jisoo_flux_lora.safetensors') image = pipeline('jisoowiro, A 20-year-old Korean singer with short black hair and soft fair skin, wearing a casual oversized hoodie and ripped jeans. She stands on a lively city street at sunset, holding a guitar case. Her warm smile reflects her youthful energy, while neon shop signs illuminate the background. 
Ultra-HD, realistic, with intricate details of her outfit and the vibrant urban setting.').images[0] image.save("output.png") ```
great0001/9b980662-9ddb-4ded-af12-9004df3b18a6
great0001
2025-01-31T08:42:18Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf", "base_model:adapter:NousResearch/CodeLlama-7b-hf", "region:us" ]
null
2025-01-31T08:30:47Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 9b980662-9ddb-4ded-af12-9004df3b18a6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a80f531073244c9f_train_data.json ds_type: json format: custom path: /workspace/input_data/a80f531073244c9f_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/9b980662-9ddb-4ded-af12-9004df3b18a6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/a80f531073244c9f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 846f22c8-74e1-47e8-9e98-11b3498ed786 wandb_project: Birthday-SN56-33-Gradients-On-Demand wandb_run: your_name wandb_runid: 846f22c8-74e1-47e8-9e98-11b3498ed786 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9b980662-9ddb-4ded-af12-9004df3b18a6 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 2.5910 | | 9.1513 | 0.0082 | 50 | 2.2809 | | 8.3367 | 0.0163 | 100 | 2.1738 | | 7.867 | 0.0245 | 150 | 2.0864 | | 8.0229 | 0.0327 | 200 | 2.0254 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
WenWW/HNC_D1-1.5_2048_epoch2
WenWW
2025-01-31T08:42:16Z
27
0
transformers
[ "transformers", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2025-01-31T08:41:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WiroAI/Dahyun-Flux-LoRA
WiroAI
2025-01-31T08:41:37Z
80
4
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "transformers", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-30T07:43:40Z
--- tags: - text-to-image - flux - lora - diffusers - transformers - template:sd-lora - ai-toolkit widget: - text: dahyunwiro, A young woman with shoulder-length brown hair, wearing a denim jacket and white sneakers, walking down a busy city street. output: url: dahyun1.png license: other instance_prompt: dahyunwiro base_model: - black-forest-labs/FLUX.1-dev license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- <div align="center"> <img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://civitai.com/user/wiroai" target="_blank" style="margin: 2px;"> <img alt="CivitAI" src="https://huggingface.co/WiroAI/pokemon-flux-lora/resolve/main/civitai.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/wiroai" target="_blank" style="margin: 2px;"> <img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-apache 2.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Model Details ### Model Description This LoRA is trained for anyone who like Dahyun from Twice. - **Developed by:** [Wiro AI - ML Team] - **Shared by:** [Wiro AI](https://wiro.ai/) <Gallery /> ## Trigger words You should use `dahyunwiro` to trigger the image generation. ## Civitai model link: [civitai](https://civitai.com/models/1198223/dahyun-from-twice-flux-lora) ```py from diffusers import FluxPipeline import torch pipeline = FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('WiroAI/Dahyun-Flux-LoRA', weight_name='dahyun_flux_lora.safetensors') image = pipeline('dahyunwiro, A young woman with shoulder-length brown hair, wearing a denim jacket and white sneakers, walking down a busy city street.').images[0] image.save("output.png") ```
neuralmagic/Mistral-Small-24B-Instruct-2501-FP8-Dynamic
neuralmagic
2025-01-31T08:41:28Z
4779
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mistral-small", "fp8", "vllm", "conversational", "en", "base_model:mistralai/Mistral-Small-24B-Instruct-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Instruct-2501", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2025-01-30T21:19:42Z
--- license: apache-2.0 language: - en tags: - mistral - mistral-small - fp8 - vllm base_model: mistralai/Mistral-Small-24B-Instruct-2501 library_name: transformers --- # Mistral-Small-24B-Instruct-2501-FP8-Dynamic ## Model Overview - **Model Architecture:** Mistral-Small-24B-Instruct-2501 - **Input:** Text - **Output:** Text - **Model Optimizations:** - **Weight quantization:** FP8 - **Activation quantization:** FP8 - **Release Date:** 3/1/2025 - **Version:** 1.0 - **Model Developers:** Neural Magic Quantized version of [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501). It achieves an average score of 78.88 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 79.45. ### Model Optimizations This model was obtained by quantizing the weights and activations to FP8 data type, ready for inference with vLLM. This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformers blocks are quantized. ## Deployment ### Use with vLLM This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below. ```python from transformers import AutoTokenizer from vllm import LLM, SamplingParams max_model_len, tp_size = 4096, 1 model_name = "neuralmagic/Mistral-Small-24B-Instruct-2501-FP8-Dynamic" tokenizer = AutoTokenizer.from_pretrained(model_name) llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True) sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id]) messages_list = [ [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}], ] prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list] outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params) generated_text = [output.outputs[0].text for output in outputs] print(generated_text) ``` vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details. ## Creation This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below. ```python import argparse from transformers import AutoModelForCausalLM, AutoTokenizer from llmcompressor.modifiers.quantization import QuantizationModifier from llmcompressor.transformers import oneshot import os def main(): parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8') parser.add_argument('--model_id', type=str, required=True, help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")') parser.add_argument('--save_path', type=str, default='.', help='Custom path to save the quantized model. 
If not provided, will use model_name-FP8-dynamic') args = parser.parse_args() # Load model model = AutoModelForCausalLM.from_pretrained( args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(args.model_id) # Configure the quantization algorithm and scheme recipe = QuantizationModifier( targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"] ) # Apply quantization oneshot(model=model, recipe=recipe) save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic") os.makedirs(save_path, exist_ok=True) # Save to disk in compressed-tensors format model.save_pretrained(save_path) tokenizer.save_pretrained(save_path) print(f"Model and tokenizer saved to: {save_path}") if __name__ == "__main__": main() ``` ## Evaluation The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), using the following commands: OpenLLM Leaderboard V1: ``` lm_eval \ --model vllm \ --model_args pretrained="neuralmagic/Mistral-Small-24B-Instruct-2501-FP8-Dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \ --tasks openllm \ --write_out \ --batch_size auto \ --output_path output_dir \ --show_config ``` OpenLLM Leaderboard V2: ``` lm_eval \ --model vllm \ --model_args pretrained="neuralmagic/Mistral-Small-24B-Instruct-2501-FP8-Dynamic",dtype=auto,add_bos_token=False,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \ --apply_chat_template \ --fewshot_as_multiturn \ --tasks leaderboard \ --write_out \ --batch_size auto \ --output_path output_dir \ --show_config ``` ### Accuracy #### OpenLLM Leaderboard V1 evaluation scores | Metric | mistralai/Mistral-Small-24B-Instruct-2501 | nm-testing/Mistral-Small-24B-Instruct-2501-FP8-dynamic | |-----------------------------------------|:---------------------------------:|:-------------------------------------------:| | ARC-Challenge (Acc-Norm, 25-shot) | 72.18 | 71.76 | | GSM8K (Strict-Match, 5-shot) | 90.14 | 89.01 | | HellaSwag (Acc-Norm, 10-shot) | 85.05 | 84.65 | | MMLU (Acc, 5-shot) | 80.69 | 80.55 | | TruthfulQA (MC2, 0-shot) | 65.55 | 64.85 | | Winogrande (Acc, 5-shot) | 83.11 | 82.48 | | **Average Score** | **79.45** | **78.88** | | **Recovery (%)** | **100.00** | **99.28** | #### OpenLLM Leaderboard V2 evaluation scores | Metric | mistralai/Mistral-Small-24B-Instruct-2501 | nm-testing/Mistral-Small-24B-Instruct-2501-FP8-dynamic | |---------------------------------------------------------|:---------------------------------:|:-------------------------------------------:| | IFEval (Inst-and-Prompt Level Strict Acc, 0-shot) | 73.27 | 73.53 | | BBH (Acc-Norm, 3-shot) | 45.18 | 44.39 | | MMLU-Pro (Acc, 5-shot) | 38.83 | 37.28 | | **Average Score** | **52.42** | **51.73** | | **Recovery (%)** | **100.00** | **98.68** | | Math-Hard (Exact-Match, 4-shot) | 6.35 | 2.99 | | GPQA (Acc-Norm, 0-shot) | 8.29 | 6.97 | | MUSR (Acc-Norm, 0-shot) | 7.84 | 8.04 | Results on Math-Hard, GPQA, and MUSR are not considred for accuracy recovery calculation because the unquantized model has close to random prediction accuracy (6.35, 8.29, 7.84) which doesn't provide a reliable baseline for recovery calculation.
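The "Recovery (%)" rows in the tables above are simply the quantized model's average score divided by the unquantized model's average. A quick self-contained check using the published numbers:

```python
# Reproduce the "Recovery (%)" rows from the evaluation tables above.
def recovery(quantized_avg: float, baseline_avg: float) -> float:
    return 100.0 * quantized_avg / baseline_avg

print(round(recovery(78.88, 79.45), 2))  # OpenLLM Leaderboard V1 average -> 99.28
print(round(recovery(51.73, 52.42), 2))  # OpenLLM Leaderboard V2 average -> 98.68
```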
alchemist69/27922ebe-f1b6-4aa2-a504-319b445673f1
alchemist69
2025-01-31T08:39:35Z
9
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:39:01Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 27922ebe-f1b6-4aa2-a504-319b445673f1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-0.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 09bdae8113c1b1e3_train_data.json ds_type: json format: custom path: /workspace/input_data/09bdae8113c1b1e3_train_data.json type: field_instruction: inputs field_output: targets format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: alchemist69/27922ebe-f1b6-4aa2-a504-319b445673f1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/09bdae8113c1b1e3_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b1e9a00c-aacb-4b8d-8b7b-ef64c7ac8d32 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b1e9a00c-aacb-4b8d-8b7b-ef64c7ac8d32 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 27922ebe-f1b6-4aa2-a504-319b445673f1 This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1501 | 0.4 | 1 | 0.9725 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
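Since the card does not include an inference snippet, the sketch below shows one plausible way to load this LoRA adapter on top of its base model with PEFT; the prompt, dtype, and generation settings are illustrative assumptions rather than values documented for this adapter.

```python
# Minimal sketch (assumptions noted above): attach the LoRA adapter to its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-0.5B-Instruct"
adapter_id = "alchemist69/27922ebe-f1b6-4aa2-a504-319b445673f1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights from this repo

inputs = tokenizer("Write one sentence about LoRA fine-tuning.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```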
Sourabh1172/layoutlmv3-document-classification_207
Sourabh1172
2025-01-31T08:38:08Z
49
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-01-31T08:37:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
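Because the card above is an unfilled template, the snippet below is only a plausible starting point for a LayoutLMv3 document classifier, inferred from the repository tags; the processor source, OCR setup, and label mapping are assumptions that should be checked against the actual checkpoint.

```python
# Hedged sketch: document classification with LayoutLMv3 (assumptions noted above).
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForSequenceClassification

# Processor taken from the base LayoutLMv3 release in case this repo ships no preprocessor config.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)  # OCR requires pytesseract
model = LayoutLMv3ForSequenceClassification.from_pretrained("Sourabh1172/layoutlmv3-document-classification_207")

image = Image.open("document.png").convert("RGB")
inputs = processor(image, return_tensors="pt")
predicted = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label.get(predicted, predicted))
```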
alchemist69/3b1753cf-46d9-49f3-8960-70051fad54a4
alchemist69
2025-01-31T08:37:47Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:29:37Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 3b1753cf-46d9-49f3-8960-70051fad54a4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5fb110e3c74c3130_train_data.json ds_type: json format: custom path: /workspace/input_data/5fb110e3c74c3130_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: alchemist69/3b1753cf-46d9-49f3-8960-70051fad54a4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/5fb110e3c74c3130_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 5cf40287-99df-483d-bba9-4777509422cc wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 5cf40287-99df-483d-bba9-4777509422cc warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3b1753cf-46d9-49f3-8960-70051fad54a4 This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8395 | 0.0001 | 1 | 0.8063 | | 0.714 | 0.0058 | 50 | 0.5623 | | 0.4633 | 0.0117 | 100 | 0.5227 | | 0.4731 | 0.0175 | 150 | 0.5065 | | 0.398 | 0.0234 | 200 | 0.5028 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-Q8_0-GGUF
roleplaiapp
2025-01-31T08:37:27Z
269
0
transformers
[ "transformers", "gguf", "8-bit", "Q8_0", "alpaca", "deepseek", "distill", "finetuned", "llama-cpp", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T08:36:52Z
--- library_name: transformers pipeline_tag: text-generation tags: - 8-bit - Q8_0 - alpaca - deepseek - distill - finetuned - gguf - llama-cpp - text-generation --- # roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-Q8_0-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-Q8_0-GGUF` **Original Model:** `DeepSeek-R1-Distill-Alpaca-FineTuned` **Quantized File:** `DeepSeek-R1-Distill-Alpaca-FineTuned.Q8_0.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q8_0` ## Overview This is a GGUF Q8_0 quantized version of DeepSeek-R1-Distill-Alpaca-FineTuned ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
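As the card stops short of usage instructions, here is a minimal, hedged example of running the quantized file with llama.cpp; the repo and file names come from this card, while the context size and prompt are placeholders.

```bash
# Hedged sketch: pull and run the Q8_0 file straight from the Hub with llama.cpp.
llama-cli --hf-repo roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-Q8_0-GGUF \
  --hf-file DeepSeek-R1-Distill-Alpaca-FineTuned.Q8_0.gguf \
  -c 4096 \
  -p "Explain what Q8_0 quantization trades off compared to smaller GGUF quants."
```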
beingbatman/CTMAE-P2-V4-S3
beingbatman
2025-01-31T08:35:56Z
24
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-large-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-01-29T22:16:52Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-large-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: CTMAE-P2-V4-S3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CTMAE-P2-V4-S3 This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1094 - Accuracy: 0.7111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 13050 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.5461 | 0.02 | 261 | 2.1854 | 0.5556 | | 0.6074 | 1.02 | 522 | 2.6518 | 0.5556 | | 1.5766 | 2.02 | 783 | 1.9843 | 0.5556 | | 0.7713 | 3.02 | 1044 | 2.2332 | 0.5556 | | 1.797 | 4.02 | 1305 | 1.7064 | 0.5556 | | 0.8914 | 5.02 | 1566 | 1.8977 | 0.5556 | | 0.7372 | 6.02 | 1827 | 2.2072 | 0.5556 | | 1.0467 | 7.02 | 2088 | 1.7544 | 0.5556 | | 1.2248 | 8.02 | 2349 | 2.0315 | 0.5556 | | 0.7126 | 9.02 | 2610 | 1.7717 | 0.5556 | | 1.2486 | 10.02 | 2871 | 2.0448 | 0.5556 | | 2.2836 | 11.02 | 3132 | 2.1988 | 0.5556 | | 0.8409 | 12.02 | 3393 | 1.6258 | 0.6444 | | 0.4642 | 13.02 | 3654 | 1.3451 | 0.6667 | | 0.007 | 14.02 | 3915 | 2.2438 | 0.5556 | | 0.9377 | 15.02 | 4176 | 1.1871 | 0.6444 | | 0.7025 | 16.02 | 4437 | 1.8905 | 0.6444 | | 0.2657 | 17.02 | 4698 | 2.1760 | 0.6222 | | 1.3937 | 18.02 | 4959 | 2.0622 | 0.6 | | 1.9924 | 19.02 | 5220 | 1.8416 | 0.6667 | | 0.0009 | 20.02 | 5481 | 1.9068 | 0.6444 | | 1.0231 | 21.02 | 5742 | 1.8428 | 0.6667 | | 0.7099 | 22.02 | 6003 | 2.3108 | 0.6 | | 0.3243 | 23.02 | 6264 | 2.2084 | 0.5778 | | 2.748 | 24.02 | 6525 | 1.8855 | 0.6889 | | 0.0002 | 25.02 | 6786 | 1.9443 | 0.6667 | | 1.1288 | 26.02 | 7047 | 1.6372 | 0.6444 | | 0.0024 | 27.02 | 7308 | 2.0813 | 0.6444 | | 1.3731 | 28.02 | 7569 | 2.1846 | 0.6444 | | 0.0085 | 29.02 | 7830 | 2.2414 | 0.6222 | | 0.0004 | 30.02 | 8091 | 2.5363 | 0.5778 | | 0.7817 | 31.02 | 8352 | 2.8433 | 0.5778 | | 0.3487 | 32.02 | 8613 | 2.6374 | 0.6444 | | 0.0014 | 33.02 | 8874 | 3.0313 | 0.5778 | | 0.0009 | 34.02 | 9135 | 2.6187 | 0.6667 | | 0.014 | 35.02 | 9396 | 2.1094 | 0.7111 | | 0.512 | 36.02 | 9657 | 2.1110 | 0.6667 | | 0.0003 | 37.02 | 9918 | 3.0441 | 0.5778 | | 0.0001 | 38.02 | 10179 | 2.4423 | 0.6889 | | 0.0009 | 39.02 | 10440 | 2.3538 | 0.6889 | | 0.0001 | 40.02 | 10701 | 2.4812 | 0.6667 | | 0.0001 | 41.02 | 10962 | 2.5847 | 0.6667 | | 0.0 | 42.02 | 11223 | 2.5525 | 0.6889 | | 0.002 | 43.02 | 11484 | 2.6746 | 0.6889 | | 0.0004 | 44.02 | 11745 | 2.4888 | 0.6667 | | 0.0001 | 45.02 | 12006 | 2.5662 | 0.6444 | | 0.0011 | 46.02 | 12267 | 2.5288 | 0.6667 | | 0.0001 | 47.02 | 12528 | 2.5611 | 0.6667 | | 0.7043 | 48.02 | 12789 | 2.7606 | 0.6667 | | 0.0001 | 49.02 | 13050 | 2.7966 | 0.6667 | 
### Framework versions - Transformers 4.46.2 - Pytorch 2.0.1+cu117 - Datasets 3.0.1 - Tokenizers 0.20.0
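No inference example is given in the card, so the sketch below only illustrates the usual VideoMAE classification flow; loading the processor from the base checkpoint, the 16-frame clip length, and the dummy frames are assumptions rather than documented properties of this model.

```python
# Hedged sketch: querying a fine-tuned VideoMAE classifier (assumptions noted above).
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("beingbatman/CTMAE-P2-V4-S3")

# Replace the random frames below with 16 real RGB frames sampled from a video clip.
frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(frames, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```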
jsn33/llama-agnia
jsn33
2025-01-31T08:34:47Z
20
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T08:00:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
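Since the template above leaves usage blank, the following is only a generic, hedged way to query a causal LM repository with the transformers pipeline; the prompt, dtype, and generation length are illustrative, and the intended chat format for this checkpoint is undocumented.

```python
# Hedged sketch: generic text-generation call against this repo (assumptions noted above).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jsn33/llama-agnia",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Hello! Briefly introduce yourself.", max_new_tokens=64)[0]["generated_text"])
```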
sleepdeprived3/Mistral-Small-24B-Instruct-2501_EXL2_6bpw_H8
sleepdeprived3
2025-01-31T08:30:34Z
8
0
vllm
[ "vllm", "safetensors", "mistral", "text-generation", "transformers", "conversational", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:mistralai/Mistral-Small-24B-Base-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Base-2501", "license:apache-2.0", "text-generation-inference", "6-bit", "exl2", "region:us" ]
text-generation
2025-01-31T07:23:48Z
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Mistral-Small-24B-Base-2501 extra_gated_description: >- If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. tags: - transformers --- # Model Card for Mistral-Small-24B-Instruct-2501 Mistral Small 3 ( 2501 ) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501). Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for: - Fast response conversational agents. - Low latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community. This release demonstrates our commitment to open source, serving as a strong base model. Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/). Model developper: Mistral AI Team ## Key Features - **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 32k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark results ### Human evaluated benchmarks | Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini | |----------|-------------|--------------|---------------|------------| | Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 | | Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 | | Ties | 0.052 | 0.060 | 0.236 | 0.160 | | Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 | | Other is better | 0.156 | 0.172 | 0.296 | 0.312 | **Note**: - We conducted side by side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts. - Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model. - We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid. 
### Publicly accessible benchmarks **Reasoning & Knowledge** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 | | gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 | **Math & Coding** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 | | math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 | **Instruction following** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 | | wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 | | arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 | | ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 | **Note**: - Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance ([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)). - Judge-based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13. ### Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth*** ## Usage The model can be used with the following frameworks: - [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM) - [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers) ### vLLM We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g.
\"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")""" ``` **_Installation_** Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4): ``` pip install --upgrade vllm ``` Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed: ``` pip install --upgrade mistral_common ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Server We recommand that you use Mistral-Small-24B-Instruct-2501 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice ``` **Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To ping the client you can use a simple Python snippet. ```py import requests import json from datetime import datetime, timedelta url = "http://<your-server>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" messages = [ { "role": "system", "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." }, { "role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French." }, ] data = {"model": model, "messages": messages} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Function calling Mistral-Small-24-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 
'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools} response = requests.post(url, headers=headers, data=json.dumps(data)) import ipdb; ipdb.set_trace() print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8) sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers If you want to use Hugging Face transformers to generate text, you can do something like this. ```py from transformers import pipeline import torch messages = [ {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16) chatbot(messages) ``` ### Ollama [Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux. 
``` ollama run mistral-small ``` 4-bit quantization (aliased to default): ``` ollama run mistral-small:24b-instruct-2501-q4_K_M ``` 8-bit quantization: ``` ollama run mistral-small:24b-instruct-2501-q8_0 ``` FP16: ``` ollama run mistral-small:24b-instruct-2501-fp16 ```
mrferr3t/50b6074f-6657-435d-937f-7637148d1de6
mrferr3t
2025-01-31T08:26:37Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf", "base_model:adapter:NousResearch/CodeLlama-7b-hf", "region:us" ]
null
2025-01-31T08:21:49Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 50b6074f-6657-435d-937f-7637148d1de6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a80f531073244c9f_train_data.json ds_type: json format: custom path: /workspace/input_data/a80f531073244c9f_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/50b6074f-6657-435d-937f-7637148d1de6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/a80f531073244c9f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 846f22c8-74e1-47e8-9e98-11b3498ed786 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 846f22c8-74e1-47e8-9e98-11b3498ed786 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 50b6074f-6657-435d-937f-7637148d1de6 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 9.5571 | 0.0002 | 1 | 2.5971 | | 7.4951 | 0.0082 | 50 | 2.2561 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso05/6ea3d3d3-1f6e-4651-b875-7cf33f2d1596
lesso05
2025-01-31T08:25:43Z
6
0
peft
[ "peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "region:us" ]
null
2025-01-31T08:23:02Z
--- library_name: peft base_model: katuni4ka/tiny-random-falcon-40b tags: - axolotl - generated_from_trainer model-index: - name: 6ea3d3d3-1f6e-4651-b875-7cf33f2d1596 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: katuni4ka/tiny-random-falcon-40b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a3cf44da3e78ec4e_train_data.json ds_type: json format: custom path: /workspace/input_data/a3cf44da3e78ec4e_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso05/6ea3d3d3-1f6e-4651-b875-7cf33f2d1596 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/a3cf44da3e78ec4e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 508c98c4-3e90-426b-af24-88a55e802816 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 508c98c4-3e90-426b-af24-88a55e802816 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 6ea3d3d3-1f6e-4651-b875-7cf33f2d1596 This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 11.0517 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 44.2332 | 0.0085 | 200 | 11.0517 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Best000/9818028c-a609-43ff-a59a-08e6c6e3f331
Best000
2025-01-31T08:24:41Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-01-31T08:07:42Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: 9818028c-a609-43ff-a59a-08e6c6e3f331 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/9818028c-a609-43ff-a59a-08e6c6e3f331 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Birthday-SN56-15-Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9818028c-a609-43ff-a59a-08e6c6e3f331 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | nan | | 0.2605 | 0.0007 | 13 | nan | | 0.0 | 0.0015 | 26 | nan | | 2.3517 | 0.0022 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Legalaz/22_llamboch2_03_21
Legalaz
2025-01-31T08:24:40Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T08:22:28Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # top This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * /root/top2 * /root/top1 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/top2 parameters: weight: 0.8441 - model: /root/top1 parameters: weight: 0.0628 merge_method: linear dtype: bfloat16 ```
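The card ends at the configuration, so as a hedged pointer only: a config like the one above is normally executed with mergekit's command-line entry point. The config filename and output directory below are placeholders, and the local paths `/root/top1` and `/root/top2` must exist on the machine running the merge.

```bash
# Hedged sketch: reproduce the linear merge from the YAML above with mergekit.
pip install mergekit
mergekit-yaml merge_config.yml ./merged-model
```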
thakkkkkk/1b5b8e50-5540-4e8f-8732-18686c5215df
thakkkkkk
2025-01-31T08:24:00Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:18:39Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: 1b5b8e50-5540-4e8f-8732-18686c5215df results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thakkkkkk/1b5b8e50-5540-4e8f-8732-18686c5215df hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 1b5b8e50-5540-4e8f-8732-18686c5215df This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.9709 | 0.0225 | 200 | 0.7223 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso17/096d3016-103f-482a-81fe-15405ab7e87f
lesso17
2025-01-31T08:23:27Z
6
0
peft
[ "peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T08:21:32Z
--- library_name: peft base_model: katuni4ka/tiny-random-falcon-40b tags: - axolotl - generated_from_trainer model-index: - name: 096d3016-103f-482a-81fe-15405ab7e87f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: katuni4ka/tiny-random-falcon-40b bf16: auto chat_template: llama3 datasets: - data_files: - a3cf44da3e78ec4e_train_data.json ds_type: json format: custom path: /workspace/input_data/a3cf44da3e78ec4e_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/096d3016-103f-482a-81fe-15405ab7e87f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/a3cf44da3e78ec4e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 508c98c4-3e90-426b-af24-88a55e802816 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 508c98c4-3e90-426b-af24-88a55e802816 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 096d3016-103f-482a-81fe-15405ab7e87f This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.9892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 43.9604 | 0.0085 | 200 | 10.9892 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Legalaz/17_llamboch2_03_17
Legalaz
2025-01-31T08:20:37Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T08:18:20Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # top This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * /root/top2 * /root/top1 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/top2 parameters: weight: 0.8413 - model: /root/top1 parameters: weight: 0.0628 merge_method: linear dtype: bfloat16 ```
DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF
DoppelReflEx
2025-01-31T08:18:28Z
439
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DoppelReflEx/MN-12B-WolFrame", "base_model:quantized:DoppelReflEx/MN-12B-WolFrame", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-29T06:20:29Z
--- base_model: DoppelReflEx/MN-12B-WolFrame library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF This model was converted to GGUF format from [`DoppelReflEx/MN-12B-WolFrame`](https://huggingface.co/DoppelReflEx/MN-12B-WolFrame) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DoppelReflEx/MN-12B-WolFrame) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF --hf-file mn-12b-wolframe-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF --hf-file mn-12b-wolframe-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF --hf-file mn-12b-wolframe-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF --hf-file mn-12b-wolframe-q6_k.gguf -c 2048 ```
ptyagi13/parul-tyagi
ptyagi13
2025-01-31T08:16:29Z
12
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T07:51:59Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: parul --- # Parul Tyagi <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `parul` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ptyagi13/parul-tyagi', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Legalaz/25_llamboch2_03_13
Legalaz
2025-01-31T08:16:20Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T08:14:10Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # top This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * /root/top2 * /root/top1 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/top2 parameters: weight: 0.9304 - model: /root/top1 parameters: weight: 0.0628 merge_method: linear dtype: bfloat16 ```
lesso09/d68bd6b3-2474-42fa-9def-1b719249ca4d
lesso09
2025-01-31T08:16:07Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:48:54Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: d68bd6b3-2474-42fa-9def-1b719249ca4d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4dcb711299282333_train_data.json ds_type: json format: custom path: /workspace/input_data/4dcb711299282333_train_data.json type: field_input: phonemes field_instruction: text_description field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso09/d68bd6b3-2474-42fa-9def-1b719249ca4d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/4dcb711299282333_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab649ea5-2df5-460b-bb5c-9011a949e67b wandb_project: new-01-29 wandb_run: your_name wandb_runid: ab649ea5-2df5-460b-bb5c-9011a949e67b warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # d68bd6b3-2474-42fa-9def-1b719249ca4d This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0401 | 0.0406 | 200 | 0.0644 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
willhsp/headline-generator-opus-mt-mul-en
willhsp
2025-01-31T08:15:54Z
5
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-01-31T08:15:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
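Since the card above is still the blank auto-generated template, here is a minimal, hedged sketch of loading the checkpoint based only on the repo's metadata (a Marian model tagged for text2text-generation); the input text and generation settings are placeholders.

```python
# Sketch: query the model through the generic text2text-generation pipeline
# implied by the repo's tags; adjust max_length and the input to your use case.
from transformers import pipeline

generator = pipeline("text2text-generation", model="willhsp/headline-generator-opus-mt-mul-en")
print(generator("Your article text goes here.", max_length=32))
```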
thangla01/765c8ff0-96bd-4421-aa23-8d9944c6b43e
thangla01
2025-01-31T08:15:45Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:50:22Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 765c8ff0-96bd-4421-aa23-8d9944c6b43e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4dcb711299282333_train_data.json ds_type: json format: custom path: /workspace/input_data/4dcb711299282333_train_data.json type: field_input: phonemes field_instruction: text_description field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thangla01/765c8ff0-96bd-4421-aa23-8d9944c6b43e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/4dcb711299282333_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab649ea5-2df5-460b-bb5c-9011a949e67b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ab649ea5-2df5-460b-bb5c-9011a949e67b warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 765c8ff0-96bd-4421-aa23-8d9944c6b43e This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0385 | 0.0406 | 200 | 0.0649 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
abaddon182/63e624d5-1153-4f43-994a-fca7143b1b99
abaddon182
2025-01-31T08:15:21Z
15
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:adapter:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:32:10Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Base-2407 tags: - axolotl - generated_from_trainer model-index: - name: 63e624d5-1153-4f43-994a-fca7143b1b99 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Base-2407 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e25cb6311706a7c7_train_data.json ds_type: json format: custom path: /workspace/input_data/e25cb6311706a7c7_train_data.json type: field_instruction: prompt_attack field_output: output_vittima format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: abaddon182/63e624d5-1153-4f43-994a-fca7143b1b99 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/e25cb6311706a7c7_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768f12f5-c6fb-403d-9cec-27135dc3578c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 768f12f5-c6fb-403d-9cec-27135dc3578c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 63e624d5-1153-4f43-994a-fca7143b1b99 This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.2221 | 0.0120 | 1 | 1.7107 | | 4.5626 | 0.6006 | 50 | 1.1805 | | 3.4418 | 1.2012 | 100 | 1.1583 | | 4.0999 | 1.8018 | 150 | 1.1127 | | 3.1203 | 2.4024 | 200 | 1.1252 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nhung02/36efda89-f2cb-4764-97eb-104c48ce50c5
nhung02
2025-01-31T08:14:38Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:51:20Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 36efda89-f2cb-4764-97eb-104c48ce50c5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4dcb711299282333_train_data.json ds_type: json format: custom path: /workspace/input_data/4dcb711299282333_train_data.json type: field_input: phonemes field_instruction: text_description field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung02/36efda89-f2cb-4764-97eb-104c48ce50c5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/4dcb711299282333_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab649ea5-2df5-460b-bb5c-9011a949e67b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ab649ea5-2df5-460b-bb5c-9011a949e67b warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 36efda89-f2cb-4764-97eb-104c48ce50c5 This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0395 | 0.0406 | 200 | 0.0652 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
auxyus/f1c3235f-b365-43f5-8318-6fa068383582
auxyus
2025-01-31T08:14:06Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "region:us" ]
null
2025-01-31T07:01:11Z
--- library_name: peft license: llama3 base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B tags: - axolotl - generated_from_trainer model-index: - name: f1c3235f-b365-43f5-8318-6fa068383582 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 423760bfd2fbfffa_train_data.json ds_type: json format: custom path: /workspace/input_data/423760bfd2fbfffa_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: auxyus/f1c3235f-b365-43f5-8318-6fa068383582 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/423760bfd2fbfffa_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 84585b20-d892-48c7-a995-1238079422b0 wandb_project: Gradients-On-Two wandb_run: your_name wandb_runid: 84585b20-d892-48c7-a995-1238079422b0 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f1c3235f-b365-43f5-8318-6fa068383582 This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 2.1241 | | 2.0895 | 0.0037 | 9 | 1.9419 | | 1.7431 | 0.0074 | 18 | 1.7472 | | 1.6979 | 0.0111 | 27 | 1.6884 | | 1.6666 | 0.0147 | 36 | 1.6593 | | 1.6459 | 0.0184 | 45 | 1.6361 | | 1.6994 | 0.0221 | 54 | 1.6213 | | 1.5952 | 0.0258 | 63 | 1.6098 | | 1.6636 | 0.0295 | 72 | 1.6024 | | 1.6419 | 0.0332 | 81 | 1.5986 | | 1.6241 | 0.0369 | 90 | 1.5964 | | 1.591 | 0.0405 | 99 | 1.5960 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
qa-02/llama-3-8b-Instruct-bnb-4bit-FE-trial
qa-02
2025-01-31T08:11:30Z
23
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T08:08:37Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** qa-02 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sercetexam9/afro-xlmr-base-tat-MICRO
sercetexam9
2025-01-31T08:09:22Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:Davlan/afro-xlmr-base", "base_model:finetune:Davlan/afro-xlmr-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-01-31T07:48:46Z
--- library_name: transformers license: mit base_model: Davlan/afro-xlmr-base tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: afro-xlmr-base-tat-MICRO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-base-tat-MICRO This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3352 - F1: 0.7041 - Roc Auc: 0.8304 - Accuracy: 0.6909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.2295 | 1.0 | 345 | 0.2586 | 0.5653 | 0.7200 | 0.5818 | | 0.174 | 2.0 | 690 | 0.2525 | 0.6211 | 0.7728 | 0.6295 | | 0.1379 | 3.0 | 1035 | 0.2428 | 0.6566 | 0.7980 | 0.6477 | | 0.0958 | 4.0 | 1380 | 0.2517 | 0.6689 | 0.7849 | 0.6636 | | 0.0594 | 5.0 | 1725 | 0.2693 | 0.6667 | 0.8033 | 0.65 | | 0.0605 | 6.0 | 2070 | 0.3010 | 0.6637 | 0.8047 | 0.6545 | | 0.0325 | 7.0 | 2415 | 0.3619 | 0.6569 | 0.8053 | 0.6545 | | 0.0141 | 8.0 | 2760 | 0.3174 | 0.6944 | 0.8326 | 0.6727 | | 0.03 | 9.0 | 3105 | 0.3352 | 0.7041 | 0.8304 | 0.6909 | | 0.0101 | 10.0 | 3450 | 0.3533 | 0.6766 | 0.8117 | 0.6682 | | 0.0054 | 11.0 | 3795 | 0.3688 | 0.6950 | 0.8274 | 0.6795 | | 0.007 | 12.0 | 4140 | 0.3798 | 0.6983 | 0.8345 | 0.675 | | 0.0075 | 13.0 | 4485 | 0.4220 | 0.6791 | 0.8228 | 0.6614 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
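A hedged sketch of querying the fine-tuned classifier is shown below. The card does not describe the label set or whether the task is multi-label (the F1/ROC-AUC metrics suggest it may be), so the example returns all label scores instead of assuming a single class; the input sentence is a placeholder.

```python
# Sketch: run the classifier and inspect all label scores.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sercetexam9/afro-xlmr-base-tat-MICRO",
    top_k=None,  # return every label with its score
)
print(classifier("Your input text goes here."))
```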
GPD1/DeepSeek-R1-Distill-phi-3-mini-4k-lorar8-alpha16-50000samples
GPD1
2025-01-31T08:09:07Z
13
0
null
[ "safetensors", "phi3", "Deepseek", "Distillation", "text-generation", "conversational", "custom_code", "en", "dataset:Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
text-generation
2025-01-31T03:01:49Z
--- license: mit datasets: - Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B language: - en base_model: - microsoft/Phi-3-mini-4k-instruct pipeline_tag: text-generation tags: - Deepseek - Distillation --- ## How to Get Started with the Model This is a distilled model created from DeepSeek-R1 knowledge, fine-tuned on top of Phi-3-mini-4k-instruct. For more details, see the Medium blog post "How to distill DeepSeek-R1: A Comprehensive Guide": https://medium.com/@prabhudev.guntur/how-to-distill-deepseek-r1-a-comprehensive-guide-c8ba04e2c28c
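A minimal, hedged sketch of loading the checkpoint for chat-style generation follows; `trust_remote_code` mirrors the Phi-3 base model's convention, and the prompt and generation settings are placeholders.

```python
# Sketch: generate with the distilled checkpoint using its chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GPD1/DeepSeek-R1-Distill-phi-3-mini-4k-lorar8-alpha16-50000samples"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

messages = [{"role": "user", "content": "Explain, step by step, why the sky is blue."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```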
pookienumnums/DpictClassicalIllustration
pookienumnums
2025-01-31T08:09:01Z
7
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-01-31T08:08:46Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/ComfyUI_02059_.png - text: '-' output: url: images/ComfyUI_02057_.png - text: '-' output: url: images/ComfyUI_02051_.png - text: '-' output: url: images/ComfyUI_02086_.png - text: '-' output: url: images/ComfyUI_02085_.png - text: '-' output: url: images/ComfyUI_02084_.png - text: '-' output: url: images/ComfyUI_02083_.png - text: '-' output: url: images/ComfyUI_02082_.png - text: '-' output: url: images/ComfyUI_02081_.png - text: '-' output: url: images/ComfyUI_02077_.png - text: '-' output: url: images/ComfyUI_02076_.png - text: '-' output: url: images/ComfyUI_02074_.png - text: '-' output: url: images/ComfyUI_02066_.png - text: '-' output: url: images/ComfyUI_02065_.png - text: '-' output: url: images/ComfyUI_02061_.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: >- dpict, monochrome, classical illustration, woodblock print, high quality, high resolution, crosshatching, greyscale, holding, weapon, multiple boys, traditional media, sword, border, holding weapon, sitting, black border, standing, 1girl, armor, barefoot, 6+boys, tree, polearm, helmet, 1boy, wings, solo, male focus, multiple girls, fine art parody, robe, feathered wings, angel, facial hair, spear, outdoors, beard, sheath, holding sword, hood, old, cloud, full body, angel wings, halo, muscular, braid, 2boys, hat, horseback riding, horse, riding, 1other, breastplate, full armor, shield, dress, monster, nude, statue, 2girls, dated, long sleeves, smoking pipe, looking at viewer, instrument, harp, boots, throne, crown, kneeling, table, book, lamp, indoors, cat, bookshelf, ambiguous gender, plume, bird, stairs, staff, bandages, navel, branch, nipples, completely nude, looking at another, penis, flaccid, nature, convenient censoring, arrow (projectile), skull, bow (weapon), knight, skeleton, parody, dragon, watercraft, mountain, sky, hatching (texture), multiple others, brown theme, sepia, laurel crown, closed mouth, old woman, long hair, flower, ass, short hair, signature, cloak, baby, shoulder armor, gauntlets, sheathed, pauldrons license: creativeml-openrail-m --- # Dpict Classical Illustration <Gallery /> ## Model description This model diffuses images in the style of classical illustrations. It was trained on a dataset containing scans of historical art currently held in the Library of Congress. It wasn't clear if these were the actual illustrations or prints in a book. Either way, it turned out quite well in my opinion. Recommend pairing with juggernautxl or a similar model. ## Trigger words You should use `dpict` to trigger the image generation. You should use `monochrome` to trigger the image generation. You should use `classical illustration` to trigger the image generation. You should use `woodblock print` to trigger the image generation. You should use `high quality` to trigger the image generation. You should use `high resolution` to trigger the image generation. You should use `crosshatching` to trigger the image generation. You should use `greyscale` to trigger the image generation. You should use `holding` to trigger the image generation. You should use `weapon` to trigger the image generation. You should use `multiple boys` to trigger the image generation. You should use `traditional media` to trigger the image generation. You should use `sword` to trigger the image generation. You should use `border` to trigger the image generation. 
You should use `holding weapon` to trigger the image generation. You should use `sitting` to trigger the image generation. You should use `black border` to trigger the image generation. You should use `standing` to trigger the image generation. You should use `1girl` to trigger the image generation. You should use `armor` to trigger the image generation. You should use `barefoot` to trigger the image generation. You should use `6+boys` to trigger the image generation. You should use `tree` to trigger the image generation. You should use `polearm` to trigger the image generation. You should use `helmet` to trigger the image generation. You should use `1boy` to trigger the image generation. You should use `wings` to trigger the image generation. You should use `solo` to trigger the image generation. You should use `male focus` to trigger the image generation. You should use `multiple girls` to trigger the image generation. You should use `fine art parody` to trigger the image generation. You should use `robe` to trigger the image generation. You should use `feathered wings` to trigger the image generation. You should use `angel` to trigger the image generation. You should use `facial hair` to trigger the image generation. You should use `spear` to trigger the image generation. You should use `outdoors` to trigger the image generation. You should use `beard` to trigger the image generation. You should use `sheath` to trigger the image generation. You should use `holding sword` to trigger the image generation. You should use `hood` to trigger the image generation. You should use `old` to trigger the image generation. You should use `cloud` to trigger the image generation. You should use `full body` to trigger the image generation. You should use `angel wings` to trigger the image generation. You should use `halo` to trigger the image generation. You should use `muscular` to trigger the image generation. You should use `braid` to trigger the image generation. You should use `2boys` to trigger the image generation. You should use `hat` to trigger the image generation. You should use `horseback riding` to trigger the image generation. You should use `horse` to trigger the image generation. You should use `riding` to trigger the image generation. You should use `1other` to trigger the image generation. You should use `breastplate` to trigger the image generation. You should use `full armor` to trigger the image generation. You should use `shield` to trigger the image generation. You should use `dress` to trigger the image generation. You should use `monster` to trigger the image generation. You should use `nude` to trigger the image generation. You should use `statue` to trigger the image generation. You should use `2girls` to trigger the image generation. You should use `dated` to trigger the image generation. You should use `long sleeves` to trigger the image generation. You should use `smoking pipe` to trigger the image generation. You should use `looking at viewer` to trigger the image generation. You should use `instrument` to trigger the image generation. You should use `harp` to trigger the image generation. You should use `boots` to trigger the image generation. You should use `throne` to trigger the image generation. You should use `crown` to trigger the image generation. You should use `kneeling` to trigger the image generation. You should use `table` to trigger the image generation. You should use `book` to trigger the image generation. 
You should use `lamp` to trigger the image generation. You should use `indoors` to trigger the image generation. You should use `cat` to trigger the image generation. You should use `bookshelf` to trigger the image generation. You should use `ambiguous gender` to trigger the image generation. You should use `plume` to trigger the image generation. You should use `bird` to trigger the image generation. You should use `stairs` to trigger the image generation. You should use `staff` to trigger the image generation. You should use `bandages` to trigger the image generation. You should use `navel` to trigger the image generation. You should use `branch` to trigger the image generation. You should use `nipples` to trigger the image generation. You should use `completely nude` to trigger the image generation. You should use `looking at another` to trigger the image generation. You should use `penis` to trigger the image generation. You should use `flaccid` to trigger the image generation. You should use `nature` to trigger the image generation. You should use `convenient censoring` to trigger the image generation. You should use `arrow (projectile)` to trigger the image generation. You should use `skull` to trigger the image generation. You should use `bow (weapon)` to trigger the image generation. You should use `knight` to trigger the image generation. You should use `skeleton` to trigger the image generation. You should use `parody` to trigger the image generation. You should use `dragon` to trigger the image generation. You should use `watercraft` to trigger the image generation. You should use `mountain` to trigger the image generation. You should use `sky` to trigger the image generation. You should use `hatching (texture)` to trigger the image generation. You should use `multiple others` to trigger the image generation. You should use `brown theme` to trigger the image generation. You should use `sepia` to trigger the image generation. You should use `laurel crown` to trigger the image generation. You should use `closed mouth` to trigger the image generation. You should use `old woman` to trigger the image generation. You should use `long hair` to trigger the image generation. You should use `flower` to trigger the image generation. You should use `ass` to trigger the image generation. You should use `short hair` to trigger the image generation. You should use `signature` to trigger the image generation. You should use `cloak` to trigger the image generation. You should use `baby` to trigger the image generation. You should use `shoulder armor` to trigger the image generation. You should use `gauntlets` to trigger the image generation. You should use `sheathed` to trigger the image generation. You should use `pauldrons` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/pookienumnums/DpictClassicalIllustration/tree/main) them in the Files & versions tab.
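A hedged sketch of using this LoRA with diffusers, built from the base model and trigger words listed above: the base SDXL checkpoint comes from the card (the author suggests JuggernautXL or a similar model also works), while the prompt is a placeholder. If the repo holds more than one `.safetensors` file, pass `weight_name=` explicitly with the filename from the Files & versions tab.

```python
# Sketch: load the Dpict LoRA on top of SDXL and generate with the trigger words.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("pookienumnums/DpictClassicalIllustration")
image = pipeline(
    "dpict, monochrome, classical illustration, crosshatching, a knight on horseback"
).images[0]
image.save("dpict-sample.png")
```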
lesso17/1bf071bf-5fcd-4453-94b4-fcb16e081a52
lesso17
2025-01-31T08:06:35Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:17:57Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: 1bf071bf-5fcd-4453-94b4-fcb16e081a52 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/1bf071bf-5fcd-4453-94b4-fcb16e081a52 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 1bf071bf-5fcd-4453-94b4-fcb16e081a52 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0112 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q4_K_M-GGUF
roleplaiapp
2025-01-31T08:05:58Z
3,353
0
transformers
[ "transformers", "gguf", "4-bit", "70b", "Q4_K_M", "deepseek", "distill", "llama", "llama-cpp", "text-generation", "uncensored", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T08:01:11Z
--- library_name: transformers pipeline_tag: text-generation tags: - 4-bit - 70b - Q4_K_M - deepseek - distill - gguf - llama - llama-cpp - text-generation - uncensored --- # roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q4_K_M-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q4_K_M-GGUF` **Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2` **Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q4_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q4_K_M` ## Overview This is a GGUF Q4_K_M quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
FabihaHaider/transliterated_nmt
FabihaHaider
2025-01-31T08:05:57Z
60
1
null
[ "safetensors", "t5", "arxiv:1910.09700", "region:us" ]
null
2025-01-31T06:21:00Z
<!-- --- library_name: transformers tags: [] --- --> # transliterated_nmt This repository contains the Banglanmt_bn_en model finetuned on the BanglaTLit dataset for thee downstream task of Bangla to Transliterated Bangla. <!-- ## Model Details --> <!-- ### Model Description --> <!-- Provide a longer summary of what this model is. --> <!-- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] --> <!-- ### Model Sources [optional] --> <!-- Provide the basic links for the model. --> <!-- - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] --> ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch model_name = "FabihaHaider/transliterated_nmt" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(torch_device) print(torch_device) def predict_output(input_sentence): input_ids = tokenizer((input_sentence), return_tensors="pt").input_ids generated_tokens = model.generate(input_ids) decoded_tokens = tokenizer.batch_decode(generated_tokens)[0] decoded_tokens = normalize(decoded_tokens) return decoded_tokens predict_output("আমি।") ``` <!-- ### Direct Use --> <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- [More Information Needed] --> <!-- ### Downstream Use [optional] --> <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> <!-- [More Information Needed] --> <!-- ### Out-of-Scope Use --> <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> <!-- [More Information Needed] --> <!-- ## Bias, Risks, and Limitations --> <!-- This section is meant to convey both technical and sociotechnical limitations. --> <!-- [More Information Needed] --> <!-- ### Recommendations --> <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> <!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. --> <!-- ## How to Get Started with the Model --> <!-- Use the code below to get started with the model. --> <!-- [More Information Needed] --> <!-- ## Training Details --> ### Finetuning Dataset [BanglaTLit](https://aclanthology.org/2024.findings-emnlp.859/) <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> <!-- [More Information Needed] --> <!-- ### Training Procedure --> <!-- This relates heavily to the Technical Specifications. 
Content here should link to that section when it is relevant to the training procedure. --> <!-- #### Preprocessing [optional] --> <!-- [More Information Needed] --> <!-- #### Training Hyperparameters --> <!-- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> <!-- #### Speeds, Sizes, Times [optional] --> <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> <!-- [More Information Needed] --> <!-- ## Evaluation --> <!-- This section describes the evaluation protocols and provides the results. --> <!-- ### Testing Data, Factors & Metrics --> <!-- #### Testing Data --> <!-- This should link to a Dataset Card if possible. --> <!-- [More Information Needed] --> <!-- #### Factors --> <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> <!-- [More Information Needed] --> <!-- #### Metrics --> <!-- These are the evaluation metrics being used, ideally with a description of why. --> <!-- [More Information Needed] --> <!-- ### Results --> <!-- [More Information Needed] --> <!-- #### Summary --> <!-- ## Model Examination [optional] --> <!-- Relevant interpretability work for the model goes here --> <!-- [More Information Needed] --> <!-- ## Environmental Impact --> <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> <!-- - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] --> <!-- ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] --> <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> <!-- **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] --> <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> <!-- [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] -->
bane5631/df28a789-11cb-4c15-8a77-3fd4df4403dc
bane5631
2025-01-31T08:05:42Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Instruct-2407", "base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:23:26Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Instruct-2407 tags: - axolotl - generated_from_trainer model-index: - name: df28a789-11cb-4c15-8a77-3fd4df4403dc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Instruct-2407 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 272aed5fd2352d41_train_data.json ds_type: json format: custom path: /workspace/input_data/272aed5fd2352d41_train_data.json type: field_input: text field_instruction: instruction field_output: summary format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: bane5631/df28a789-11cb-4c15-8a77-3fd4df4403dc hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/272aed5fd2352d41_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1919911b-3d63-4d23-a0b1-85362cc587f6 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 1919911b-3d63-4d23-a0b1-85362cc587f6 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # df28a789-11cb-4c15-8a77-3fd4df4403dc This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.9135 | 0.3438 | 200 | 0.7214 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
danieldimp/marcusmp
danieldimp
2025-01-31T08:04:32Z
6
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T07:52:59Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: marcusmp --- # Marcusmp <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `marcusmp` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('danieldimp/marcusmp', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
lesso17/845a71f4-1e0d-481b-83c0-5b8c70b405e7
lesso17
2025-01-31T08:04:07Z
12
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:17:38Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - axolotl - generated_from_trainer model-index: - name: 845a71f4-1e0d-481b-83c0-5b8c70b405e7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama_v1.1 bf16: auto chat_template: llama3 datasets: - data_files: - f6627dfddf7998ee_train_data.json ds_type: json format: custom path: /workspace/input_data/f6627dfddf7998ee_train_data.json type: field_input: traj_0_response field_instruction: prompt field_output: traj_0_solution_0 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/845a71f4-1e0d-481b-83c0-5b8c70b405e7 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/f6627dfddf7998ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 41e012f9-ee25-49ae-abe0-b64021ea6e9d wandb_project: new-01-29 wandb_run: your_name wandb_runid: 41e012f9-ee25-49ae-abe0-b64021ea6e9d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 845a71f4-1e0d-481b-83c0-5b8c70b405e7 This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9969 | 0.0273 | 200 | 1.2559 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Legalaz/06_llamboch2_02_59
Legalaz
2025-01-31T08:02:58Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T08:00:42Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # top This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * /root/top1 * /root/top2 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/top2 parameters: weight: 0.9890 - model: /root/top1 parameters: weight: 0.0628 merge_method: linear dtype: bfloat16 ```
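For intuition, the `linear` method is essentially a per-parameter weighted sum of the source checkpoints. Below is a minimal, self-contained sketch of that idea using toy tensors in place of `/root/top1` and `/root/top2`; the actual merge is performed by mergekit, which also handles sharded checkpoints and may normalize the weights depending on its options.

```python
import torch

def linear_merge(state_dicts, weights):
    """Per-tensor weighted sum, the core of a `linear` merge."""
    merged = {}
    for name in state_dicts[0]:
        acc = sum(w * sd[name].to(torch.float32) for sd, w in zip(state_dicts, weights))
        merged[name] = acc.to(torch.bfloat16)  # dtype: bfloat16, matching the config above
    return merged

# Toy stand-ins for the two source models' state dicts.
top2 = {"layer.weight": torch.randn(4, 4)}
top1 = {"layer.weight": torch.randn(4, 4)}

# Weights taken from the YAML config above.
merged = linear_merge([top2, top1], weights=[0.9890, 0.0628])
print(merged["layer.weight"].dtype, merged["layer.weight"].shape)
```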
Best000/3fb34623-7d43-4e0e-86a2-18c4cf10dbe1
Best000
2025-01-31T08:01:54Z
7
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:54:13Z
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b tags: - axolotl - generated_from_trainer model-index: - name: 3fb34623-7d43-4e0e-86a2-18c4cf10dbe1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - df637254d2930ff2_train_data.json ds_type: json format: custom path: /workspace/input_data/df637254d2930ff2_train_data.json type: field_input: '' field_instruction: prompt field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/3fb34623-7d43-4e0e-86a2-18c4cf10dbe1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/df637254d2930ff2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ae731b77-90f6-489c-a8d2-69167bce2830 wandb_project: Birthday-SN56-16-Gradients-On-Demand wandb_run: your_name wandb_runid: ae731b77-90f6-489c-a8d2-69167bce2830 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3fb34623-7d43-4e0e-86a2-18c4cf10dbe1 This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 1.1307 | | 1.0891 | 0.0040 | 13 | 1.0453 | | 0.9926 | 0.0080 | 26 | 0.9781 | | 1.0255 | 0.0120 | 39 | 0.9499 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
shibajustfor/0b8828f0-1359-48f5-92e7-5887ef998e05
shibajustfor
2025-01-31T08:01:44Z
5
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:54:01Z
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b tags: - axolotl - generated_from_trainer model-index: - name: 0b8828f0-1359-48f5-92e7-5887ef998e05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - df637254d2930ff2_train_data.json ds_type: json format: custom path: /workspace/input_data/df637254d2930ff2_train_data.json type: field_input: '' field_instruction: prompt field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/0b8828f0-1359-48f5-92e7-5887ef998e05 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/df637254d2930ff2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ae731b77-90f6-489c-a8d2-69167bce2830 wandb_project: Birthday-SN56-11-Gradients-On-Demand wandb_run: your_name wandb_runid: ae731b77-90f6-489c-a8d2-69167bce2830 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 0b8828f0-1359-48f5-92e7-5887ef998e05 This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 1.1307 | | 1.0824 | 0.0040 | 13 | 1.0394 | | 0.9829 | 0.0080 | 26 | 0.9763 | | 1.0237 | 0.0120 | 39 | 0.9498 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Jellon/Mistral-Small-24B-Instruct-2501-exl2-6bpw
Jellon
2025-01-31T08:01:37Z
19
0
vllm
[ "vllm", "safetensors", "mistral", "text-generation", "transformers", "conversational", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:mistralai/Mistral-Small-24B-Instruct-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Instruct-2501", "license:apache-2.0", "text-generation-inference", "6-bit", "exl2", "region:us" ]
text-generation
2025-01-31T06:57:45Z
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: apache-2.0 library_name: vllm inference: false base_model: mistralai/Mistral-Small-24B-Instruct-2501 extra_gated_description: >- If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. tags: - transformers --- 6bpw exl2 quant of: https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501 --- # Model Card for Mistral-Small-24B-Instruct-2501 Mistral Small 3 ( 2501 ) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501). Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for: - Fast response conversational agents. - Low latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community. This release demonstrates our commitment to open source, serving as a strong base model. Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/). Model developper: Mistral AI Team ## Key Features - **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 32k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark results ### Human evaluated benchmarks | Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini | |----------|-------------|--------------|---------------|------------| | Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 | | Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 | | Ties | 0.052 | 0.060 | 0.236 | 0.160 | | Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 | | Other is better | 0.156 | 0.172 | 0.296 | 0.312 | **Note**: - We conducted side by side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts. - Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model. - We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid. 
### Publicly accesible benchmarks **Reasoning & Knowledge** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2b-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 | | gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 | **Math & Coding** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2b-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 | | math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 | **Instruction following** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2b-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 | | wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 | | arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 | | ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 | **Note**: - Performance accuracy on all benchmarks were obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance ([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)). - Judge based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13. ### Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth*** ## Usage The model can be used with the following frameworks; - [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM) - [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers) ### vLLM We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **Note 1**: We recommond using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailer it for your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. 
\"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")""" ``` **_Installation_** Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4): ``` pip install --upgrade vllm ``` Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed: ``` pip install --upgrade mistral_common ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Server We recommand that you use Mistral-Small-24B-Instruct-2501 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice ``` **Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To ping the client you can use a simple Python snippet. ```py import requests import json from datetime import datetime, timedelta url = "http://<your-server>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" messages = [ { "role": "system", "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." }, { "role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French." }, ] data = {"model": model, "messages": messages} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Function calling Mistral-Small-24-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 
'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools} response = requests.post(url, headers=headers, data=json.dumps(data)) import ipdb; ipdb.set_trace() print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8) sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers If you want to use Hugging Face transformers to generate text, you can do something like this. ```py from transformers import pipeline import torch messages = [ {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16) chatbot(messages) ``` ### Ollama [Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux. 
``` ollama run mistral-small ``` 4-bit quantization (aliased to default): ``` ollama run mistral-small:24b-instruct-2501-q4_K_M ``` 8-bit quantization: ``` ollama run mistral-small:24b-instruct-2501-q8_0 ``` FP16: ``` ollama run mistral-small:24b-instruct-2501-fp16 ```
roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF
roleplaiapp
2025-01-31T08:00:26Z
22
0
transformers
[ "transformers", "gguf", "3-bit", "70b", "Q3_K_S", "deepseek", "distill", "llama", "llama-cpp", "text-generation", "uncensored", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T07:58:38Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 70b - Q3_K_S - deepseek - distill - gguf - llama - llama-cpp - text-generation - uncensored --- # roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF` **Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2` **Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_S.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_S` ## Overview This is a GGUF Q3_K_S quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
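A minimal usage sketch with `llama-cpp-python` is below. It is an illustration only and assumes the quantized file named above has already been downloaded locally and that the machine has enough memory for a 70B Q3_K_S model.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path to the quantized file listed above, downloaded locally.
llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_S.gguf",
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a Q3_K_S quantization trades off."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```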
nomadrp/tq-llama-binary-20each-ws-all-langs-2epochs
nomadrp
2025-01-31T07:59:59Z
18
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
null
2025-01-31T06:39:22Z
--- library_name: peft license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - trl - dpo - generated_from_trainer model-index: - name: tq-llama-binary-20each-ws-all-langs-2epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tq-llama-binary-20each-ws-all-langs-2epochs This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.45.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.20.3
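Because this repository holds a PEFT adapter rather than full model weights, it needs to be loaded on top of the base model. The snippet below is a hedged sketch: it assumes access to the gated Llama 3.1 base model and uses this repo's id as the adapter id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # base model listed above (gated; requires access)
adapter_id = "nomadrp/tq-llama-binary-20each-ws-all-langs-2epochs"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained adapter

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```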
kostiantynk1205/fa5cda42-2977-4ae4-9f64-2655b2619396
kostiantynk1205
2025-01-31T07:46:57Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-7b-hf-flash", "region:us" ]
null
2025-01-31T07:45:38Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: fa5cda42-2977-4ae4-9f64-2655b2619396 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ef066a96964aba8a_train_data.json ds_type: json format: custom path: /workspace/input_data/ef066a96964aba8a_train_data.json type: field_instruction: title field_output: description format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk1205/fa5cda42-2977-4ae4-9f64-2655b2619396 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ef066a96964aba8a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7cf2646b-3084-4458-ab3f-4af8618983fd wandb_project: Birthday-SN56-23-Gradients-On-Demand wandb_run: your_name wandb_runid: 7cf2646b-3084-4458-ab3f-4af8618983fd warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # fa5cda42-2977-4ae4-9f64-2655b2619396 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.4431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0040 | 1 | 2.4172 | | 8.7174 | 0.0519 | 13 | 1.8004 | | 6.8923 | 0.1038 | 26 | 1.5359 | | 5.9897 | 0.1557 | 39 | 1.4431 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT
EpistemeAI
2025-01-31T07:44:17Z
109
1
null
[ "safetensors", "llama", "dataset:AI-MO/NuminaMath-TIR", "dataset:bespokelabs/Bespoke-Stratos-17k", "license:apache-2.0", "region:us" ]
null
2025-01-29T05:51:48Z
--- datasets: - AI-MO/NuminaMath-TIR - bespokelabs/Bespoke-Stratos-17k license: apache-2.0 --- Upgrade version [EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT-V2] (https://huggingface.co/EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT-V2) ## Introduction Introducing Reasoning Llama 3.1: The Next Evolution in Conversational AI We are thrilled to unveil Reasoning Llama 3.1, the latest advancement in our suite of AI models. Building upon the robust foundation of the renowned Llama series, Reasoning Llama 3.1 introduces the groundbreaking Chain of Thought (CoT) capabilities, elevating its reasoning prowess to new heights. ## Key Features of Reasoning Llama 3.1: Enhanced Chain of Thought Reasoning: At the core of Reasoning Llama 3.1 lies its sophisticated CoT framework, enabling the model to perform multi-step reasoning with greater accuracy and coherence. This ensures more reliable and contextually appropriate responses, especially for complex queries that require logical progression. Conversational Excellence: Designed with interactivity in mind, Reasoning Llama 3.1 excels in maintaining engaging and fluid conversations. Whether it's casual dialogue or in-depth discussions, the model adapts seamlessly to various conversational styles, providing users with a natural and intuitive interaction experience. Instruction-Supervised Fine-Tuning: Leveraging advanced supervised fine-tuning techniques, Reasoning Llama 3.1 has been meticulously trained on diverse instructional data. This fine-tuning process enhances the model's ability to understand and execute user instructions with precision, making it an invaluable tool for a wide range of applications. Unsloth Integration: Incorporating Unsloth, our proprietary unsupervised learning framework, Reasoning Llama 3.1 benefits from continuous learning capabilities. This integration allows the model to adapt and improve over time, ensuring it remains up-to-date with evolving language patterns and user needs without the constant need for manual intervention. ## Why Choose Reasoning Llama 3.1? Reasoning Llama 3.1 stands out as a versatile and powerful AI solution tailored for both developers and end-users. Its combination of advanced reasoning, conversational intelligence, and adaptive learning mechanisms make it ideally suited for applications ranging from customer support and virtual assistants to educational tools and creative content generation. As we continue to push the boundaries of artificial intelligence, Reasoning Llama 3.1 exemplifies our commitment to delivering state-of-the-art models that empower users with intelligent, reliable, and user-friendly technology. Experience the future of conversational AI with Reasoning Llama 3.1 and unlock new possibilities in human-machine interaction. ## How to use Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. 
```python
import torch
from transformers import pipeline

model_id = "EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a powerful AI math assistant"},
    # Raw string so LaTeX escapes such as \frac and \geqslant are not mangled by Python
    {"role": "user", "content": r"Given the quadratic function $f(x)=ax^{2}+bx+c$ with its derivative $f′(x)$, where $f′(0) > 0$, and $f(x)\geqslant 0$ for any real number $x$, find the minimum value of $\frac{f(1)}{f′(0)}$."},
]
outputs = pipe(
    messages,
    max_new_tokens=2048,
)
print(outputs[0]["generated_text"][-1])
```

# Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** EpistemeAI/Reasoning-Llama-3.1-CoT-RE1

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## 5. Citation

```
@misc{EpistemeAI2025,
  title  = {EpistemeAI},
  author = {Thomas Yiu},
  year   = {2025},
}

@misc{bespoke_stratos,
  author       = {Bespoke Labs},
  title        = {Bespoke-Stratos: The unreasonable effectiveness of reasoning distillation},
  howpublished = {https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation},
  note         = {Accessed: 2025-01-22},
  year         = {2025}
}

@misc{numina_math_datasets,
  author       = {Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu},
  title        = {NuminaMath TIR},
  year         = {2024},
  publisher    = {Numina},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-TIR}},
  note         = {Dataset report: \url{https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf}}
}
```

## 6. Contact

If you have any questions, please raise an issue or contact us at [episteme.ai@proton.me](mailto:episteme.ai@proton.me).

# Reference/Inspired

[Open-R1: a fully open reproduction of DeepSeek-R1](https://huggingface.co/blog/open-r1)
roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF
roleplaiapp
2025-01-31T07:43:36Z
326
0
transformers
[ "transformers", "gguf", "2-bit", "70b", "Q2_K", "deepseek", "distill", "llama", "llama-cpp", "text-generation", "uncensored", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T07:42:07Z
--- library_name: transformers pipeline_tag: text-generation tags: - 2-bit - 70b - Q2_K - deepseek - distill - gguf - llama - llama-cpp - text-generation - uncensored --- # roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF` **Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2` **Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q2_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` ## Overview This is a GGUF Q2_K quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
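As a rough, untested sketch, the quantized file can be fetched with `huggingface_hub` and loaded through `llama-cpp-python`; the file and repo names are taken from the listing above, and a 70B download is large even at Q2_K.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the quantized file listed above, then load it with the llama.cpp bindings.
gguf_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF",
    filename="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q2_K.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Q: What does Q2_K quantization trade off?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```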
mrferr3t/a73aeffa-c13c-45b8-ad56-12d3e31085fe
mrferr3t
2025-01-31T07:39:27Z
12
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:26:30Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - axolotl - generated_from_trainer model-index: - name: a73aeffa-c13c-45b8-ad56-12d3e31085fe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama_v1.1 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f6627dfddf7998ee_train_data.json ds_type: json format: custom path: /workspace/input_data/f6627dfddf7998ee_train_data.json type: field_input: traj_0_response field_instruction: prompt field_output: traj_0_solution_0 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/a73aeffa-c13c-45b8-ad56-12d3e31085fe hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/f6627dfddf7998ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 41e012f9-ee25-49ae-abe0-b64021ea6e9d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 41e012f9-ee25-49ae-abe0-b64021ea6e9d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a73aeffa-c13c-45b8-ad56-12d3e31085fe This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2394 | 0.0001 | 1 | 1.3524 | | 0.6882 | 0.0068 | 50 | 0.9268 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
cilooor/046b85c9-23cf-42fa-ad72-faea29e54f78
cilooor
2025-01-31T07:39:05Z
15
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:18:44Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - axolotl - generated_from_trainer model-index: - name: 046b85c9-23cf-42fa-ad72-faea29e54f78 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama_v1.1 bf16: true chat_template: llama3 data_processes: 24 dataset_prepared_path: null datasets: - data_files: - f6627dfddf7998ee_train_data.json ds_type: json format: custom path: /workspace/input_data/f6627dfddf7998ee_train_data.json type: field_input: traj_0_response field_instruction: prompt field_output: traj_0_solution_0 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 4 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: cilooor/046b85c9-23cf-42fa-ad72-faea29e54f78 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 7.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.07 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine lr_scheduler_warmup_steps: 50 max_grad_norm: 0.3 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/f6627dfddf7998ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.999 adam_epsilon: 1e-8 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 17333 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer total_train_batch_size: 32 train_batch_size: 8 train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 41e012f9-ee25-49ae-abe0-b64021ea6e9d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 41e012f9-ee25-49ae-abe0-b64021ea6e9d warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 046b85c9-23cf-42fa-ad72-faea29e54f78 This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 17333 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7648 | 0.0005 | 1 | 1.3696 | | 1.1307 | 0.0273 | 50 | 0.9475 | | 1.0357 | 0.0547 | 100 | 0.8693 | | 0.9074 | 0.0820 | 150 | 0.8440 | | 0.9893 | 0.1093 | 200 | 0.8387 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/50cf18ca-29a7-43e4-b52c-209e7bc94fc6
mrferr3t
2025-01-31T07:38:54Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:37:30Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: 50cf18ca-29a7-43e4-b52c-209e7bc94fc6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/50cf18ca-29a7-43e4-b52c-209e7bc94fc6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 50cf18ca-29a7-43e4-b52c-209e7bc94fc6 This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.2451

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 97

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9305 | 0.0104 | 1 | 2.0875 |
| 1.3837 | 0.5181 | 50 | 1.2451 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
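## How to load the adapter (sketch)

The card above only documents training. A minimal loading sketch is shown below, assuming the LoRA weights are published under the `hub_model_id` from the axolotl config and that the base model fits in available memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the LoRA adapter was trained against.
base = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Llama-3-8b-Ita", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Llama-3-8b-Ita")

# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "mrferr3t/50cf18ca-29a7-43e4-b52c-209e7bc94fc6")

inputs = tokenizer("Ciao, come stai?", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```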
Best000/8ddd5201-de60-4f90-b2e3-4a4b8d9b1acc
Best000
2025-01-31T07:38:30Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:37:25Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: 8ddd5201-de60-4f90-b2e3-4a4b8d9b1acc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/8ddd5201-de60-4f90-b2e3-4a4b8d9b1acc hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Birthday-SN56-16-Gradients-On-Demand wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 8ddd5201-de60-4f90-b2e3-4a4b8d9b1acc This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.2792

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0104 | 1 | 2.0874 |
| 1.899 | 0.1347 | 13 | 1.5013 |
| 1.4695 | 0.2694 | 26 | 1.3251 |
| 1.283 | 0.4041 | 39 | 1.2792 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
lesso01/f55ed13d-287f-4850-b695-20aec435094e
lesso01
2025-01-31T07:36:23Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-7b-hf-flash", "region:us" ]
null
2025-01-31T07:25:57Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: f55ed13d-287f-4850-b695-20aec435094e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ef066a96964aba8a_train_data.json ds_type: json format: custom path: /workspace/input_data/ef066a96964aba8a_train_data.json type: field_instruction: title field_output: description format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso01/f55ed13d-287f-4850-b695-20aec435094e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/ef066a96964aba8a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7cf2646b-3084-4458-ab3f-4af8618983fd wandb_project: new-01-29 wandb_run: your_name wandb_runid: 7cf2646b-3084-4458-ab3f-4af8618983fd warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f55ed13d-287f-4850-b695-20aec435094e This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.3251

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.5535 | 0.7984 | 200 | 1.3251 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
adammandic87/bc1558dc-b7da-4aad-bc5e-ea57281facde
adammandic87
2025-01-31T07:36:21Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-01-31T07:19:09Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: bc1558dc-b7da-4aad-bc5e-ea57281facde results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/bc1558dc-b7da-4aad-bc5e-ea57281facde hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Birthday-SN56-13-Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bc1558dc-b7da-4aad-bc5e-ea57281facde This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0007 | 13 | nan |
| 0.0 | 0.0015 | 26 | nan |
| 0.0 | 0.0022 | 39 | nan |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
adammandic87/f62fa779-f2a3-4e37-ade5-d772103b1717
adammandic87
2025-01-31T07:35:29Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-01-31T07:18:45Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: f62fa779-f2a3-4e37-ade5-d772103b1717 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/f62fa779-f2a3-4e37-ade5-d772103b1717 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Birthday-SN56-34-Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f62fa779-f2a3-4e37-ade5-d772103b1717 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.2605 | 0.0007 | 13 | nan |
| 0.0 | 0.0015 | 26 | nan |
| 2.3517 | 0.0022 | 39 | nan |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
beast33/902a5079-22c8-4d77-a4f7-edade50bdf6d
beast33
2025-01-31T07:33:10Z
7
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:31:30Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 902a5079-22c8-4d77-a4f7-edade50bdf6d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 938e7b961a3fae54_train_data.json ds_type: json format: custom path: /workspace/input_data/938e7b961a3fae54_train_data.json type: field_input: choices field_instruction: full_prompt field_output: example format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: beast33/902a5079-22c8-4d77-a4f7-edade50bdf6d hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/938e7b961a3fae54_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 264a9c6b-5cbc-436b-8c95-a81e899b2353 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 264a9c6b-5cbc-436b-8c95-a81e899b2353 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 902a5079-22c8-4d77-a4f7-edade50bdf6d This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.0005

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 21

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0007 | 1.0 | 21 | 0.0005 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
YMEA/Pathe-asr-LenaData-V0
YMEA
2025-01-31T07:32:38Z
25
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "bam", "dataset:YMEA/lena_audio", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-01-31T03:17:15Z
---
library_name: transformers
language:
- bam
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- YMEA/lena_audio
model-index:
- name: Whisper Bambara-Bambara
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Bambara-Bambara

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the BambaraAsr dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
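## Inference sketch

A minimal transcription sketch with the 🤗 Transformers pipeline is shown below; it assumes the fine-tuned checkpoint loads as a standard Whisper model, and `audio.wav` is a hypothetical local recording in Bambara.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="YMEA/Pathe-asr-LenaData-V0",
    chunk_length_s=30,  # long-form audio is split into 30-second chunks
)

result = asr("audio.wav")  # path to a local audio file (assumption)
print(result["text"])
```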
featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF
featherless-ai-quants
2025-01-31T07:30:05Z
212
0
null
[ "gguf", "text-generation", "base_model:SteelStorage/L3.1-MS-Astoria-70b-v2", "base_model:quantized:SteelStorage/L3.1-MS-Astoria-70b-v2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T06:20:05Z
---
base_model: SteelStorage/L3.1-MS-Astoria-70b-v2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---

# SteelStorage/L3.1-MS-Astoria-70b-v2 GGUF Quantizations 🚀

![Featherless AI Quants](./featherless-quants.png)

*Optimized GGUF quantization files for enhanced model performance*

> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.

---

## Available Quantizations 📊

| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [SteelStorage-L3.1-MS-Astoria-70b-v2-IQ4_XS](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-IQ4_XS) | 36496.80 MB (folder) |
| Q2_K | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q2_K](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q2_K) | 25153.27 MB (folder) |
| Q3_K_L | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q3_K_L](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q3_K_L) | 35420.04 MB (folder) |
| Q3_K_M | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q3_K_M](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q3_K_M) | 32680.04 MB (folder) |
| Q3_K_S | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q3_K_S](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q3_K_S) | 29480.04 MB (folder) |
| Q4_K_M | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q4_K_M](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q4_K_M) | 40550.61 MB (folder) |
| Q4_K_S | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q4_K_S](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q4_K_S) | 38478.11 MB (folder) |
| Q5_K_M | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q5_K_M](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q5_K_M) | 47635.86 MB (folder) |
| Q5_K_S | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q5_K_S](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q5_K_S) | 46403.36 MB (folder) |
| Q6_K | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q6_K](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q6_K) | 55206.44 MB (folder) |
| Q8_0 | [SteelStorage-L3.1-MS-Astoria-70b-v2-Q8_0](https://huggingface.co/featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF/tree/main/SteelStorage-L3.1-MS-Astoria-70b-v2-Q8_0) | 71501.79 MB (folder) |

---

## ⚡ Powered by [Featherless AI](https://featherless.ai)

### Key Features

- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month

---

**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
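## Downloading a single quantization (sketch)

Each quantization above is stored as a folder of GGUF files, so a sketch using `huggingface_hub.snapshot_download` with an `allow_patterns` filter is shown below; the folder name is taken from the table, and downloading Q4_K_M rather than the full repository is just an example choice.

```python
from huggingface_hub import snapshot_download

# Fetch only the Q4_K_M folder instead of every quantization in the repo.
local_dir = snapshot_download(
    repo_id="featherless-ai-quants/SteelStorage-L3.1-MS-Astoria-70b-v2-GGUF",
    allow_patterns=["SteelStorage-L3.1-MS-Astoria-70b-v2-Q4_K_M/*"],
)
print(local_dir)  # path containing the downloaded GGUF shards
```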
philip-hightech/5b67fba1-2225-443a-9af8-4fbcf4440017
philip-hightech
2025-01-31T07:30:04Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-7b-hf-flash", "region:us" ]
null
2025-01-31T07:27:18Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 5b67fba1-2225-443a-9af8-4fbcf4440017 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ef066a96964aba8a_train_data.json ds_type: json format: custom path: /workspace/input_data/ef066a96964aba8a_train_data.json type: field_instruction: title field_output: description format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: philip-hightech/5b67fba1-2225-443a-9af8-4fbcf4440017 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ef066a96964aba8a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7cf2646b-3084-4458-ab3f-4af8618983fd wandb_project: Mine-SN56-21-Gradients-On-Demand wandb_run: your_name wandb_runid: 7cf2646b-3084-4458-ab3f-4af8618983fd warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5b67fba1-2225-443a-9af8-4fbcf4440017 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.3994

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | 2.4172 |
| 4.0157 | 0.0259 | 13 | 1.6903 |
| 3.1184 | 0.0519 | 26 | 1.4732 |
| 2.9423 | 0.0778 | 39 | 1.3994 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
sercetexam9/rubert-tiny2-rus-MICRO
sercetexam9
2025-01-31T07:29:01Z
6
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:cointegrated/rubert-tiny2", "base_model:finetune:cointegrated/rubert-tiny2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-01-31T07:26:14Z
---
library_name: transformers
license: mit
base_model: cointegrated/rubert-tiny2
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: rubert-tiny2-rus-MICRO
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# rubert-tiny2-rus-MICRO

This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1485
- F1: 0.8458
- Roc Auc: 0.9005
- Accuracy: 0.7887

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.2588 | 1.0 | 607 | 0.2564 | 0.6892 | 0.7777 | 0.6469 |
| 0.1663 | 2.0 | 1214 | 0.1743 | 0.8322 | 0.8850 | 0.7668 |
| 0.1014 | 3.0 | 1821 | 0.1481 | 0.8399 | 0.8829 | 0.7912 |
| 0.0716 | 4.0 | 2428 | 0.1458 | 0.8433 | 0.8968 | 0.7861 |
| 0.0496 | 5.0 | 3035 | 0.1440 | 0.8423 | 0.8945 | 0.7835 |
| 0.0389 | 6.0 | 3642 | 0.1485 | 0.8458 | 0.9005 | 0.7887 |
| 0.037 | 7.0 | 4249 | 0.1538 | 0.8428 | 0.8998 | 0.7822 |
| 0.0218 | 8.0 | 4856 | 0.1623 | 0.8422 | 0.8997 | 0.7809 |
| 0.0196 | 9.0 | 5463 | 0.1678 | 0.8420 | 0.9007 | 0.7796 |
| 0.0204 | 10.0 | 6070 | 0.1743 | 0.8355 | 0.8967 | 0.7732 |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
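## Inference sketch

The card does not document the label set or the decision threshold, so the sketch below only shows how one might obtain per-label scores with the 🤗 Transformers pipeline; returning all scores with `top_k=None` is an assumption that fits the multi-label metrics (F1 / ROC AUC) reported above.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="sercetexam9/rubert-tiny2-rus-MICRO", top_k=None)

# Score a sample Russian sentence against every label the head was trained with.
scores = clf("Пример русского текста для классификации.")[0]
for item in sorted(scores, key=lambda s: s["score"], reverse=True):
    print(f'{item["label"]}: {item["score"]:.3f}')
```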
DINGOLANI/distilbert-ner-v2
DINGOLANI
2025-01-31T07:28:49Z
45
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-01-31T07:28:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Best000/b44f94f4-56ce-4c9a-8a24-dd304ed4037e
Best000
2025-01-31T07:28:31Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-7b-hf-flash", "region:us" ]
null
2025-01-31T07:27:18Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: b44f94f4-56ce-4c9a-8a24-dd304ed4037e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ef066a96964aba8a_train_data.json ds_type: json format: custom path: /workspace/input_data/ef066a96964aba8a_train_data.json type: field_instruction: title field_output: description format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/b44f94f4-56ce-4c9a-8a24-dd304ed4037e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ef066a96964aba8a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7cf2646b-3084-4458-ab3f-4af8618983fd wandb_project: Birthday-SN56-32-Gradients-On-Demand wandb_run: your_name wandb_runid: 7cf2646b-3084-4458-ab3f-4af8618983fd warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b44f94f4-56ce-4c9a-8a24-dd304ed4037e This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.5488

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0040 | 1 | 2.4172 |
| 9.1505 | 0.0519 | 13 | 2.3770 |
| 9.2624 | 0.1038 | 26 | 1.8705 |
| 7.4654 | 0.1557 | 39 | 1.5488 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
qingy2024/Qwen2.5-Coder-Draft-1.5B-Instruct
qingy2024
2025-01-31T07:27:53Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T05:56:33Z
---
library_name: transformers
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
---

# Qwen2.5-Coder-Draft-1.5B-Instruct

A draft model suitable for [Qwen2.5 Coder 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct).

It uses a vocabulary size of 152064, the same as Qwen2.5 Coder 32B Instruct, so it can be used as a draft model in vLLM directly without any workaround.
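## Speculative decoding sketch (vLLM)

A rough sketch of pairing this draft model with the 32B target in vLLM is shown below. The argument names for speculative decoding have changed across vLLM releases, so treat `speculative_model` and `num_speculative_tokens` as assumptions to be checked against the version you run.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    speculative_model="qingy2024/Qwen2.5-Coder-Draft-1.5B-Instruct",  # draft model (assumed arg name)
    num_speculative_tokens=5,  # draft tokens proposed per step (assumed arg name)
    tensor_parallel_size=4,    # adjust to your hardware
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["Write a Python function that reverses a linked list."], params)
print(outputs[0].outputs[0].text)
```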
tensorwa/dp_mg_h1_01
tensorwa
2025-01-31T07:27:53Z
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Peacoc/chatml_2test43", "base_model:finetune:Peacoc/chatml_2test43", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T07:25:23Z
---
base_model:
- itorgov/model-1738289983
- Peacoc/chatml_2test43
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:
* [itorgov/model-1738289983](https://huggingface.co/itorgov/model-1738289983)
* [Peacoc/chatml_2test43](https://huggingface.co/Peacoc/chatml_2test43)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: itorgov/model-1738289983
    layer_range: [0, 32]
  - model: Peacoc/chatml_2test43
    layer_range: [0, 32]
merge_method: slerp
base_model: itorgov/model-1738289983
parameters:
  t:
  - filter: self_attn
    value: 0.98
  - filter: mlp
    value: 0.99
  - value: 1
dtype: bfloat16
```
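### What SLERP does (illustrative sketch)

For readers unfamiliar with the merge method named above, the sketch below shows spherical linear interpolation between two flattened weight tensors. It is an illustration of the formula only, not mergekit's actual implementation; the interpolation factor `t` corresponds to the per-filter values in the configuration.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)

# Example: interpolate two random "layers" with t = 0.98 (the self_attn filter value above).
w0, w1 = torch.randn(4, 4), torch.randn(4, 4)
print(slerp(0.98, w0, w1))
```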
Razvan1974/Jimi
Razvan1974
2025-01-31T07:25:08Z
22
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T07:04:43Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: Jimi
---

# Jimi

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Jimi` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Razvan1974/Jimi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Mohamedk12345678/patikya
Mohamedk12345678
2025-01-31T07:25:07Z
7
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T07:11:34Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: patikya
---

# Patikya

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `patikya` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Mohamedk12345678/patikya', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
ancient41/f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6
ancient41
2025-01-31T07:24:23Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "region:us" ]
null
2025-01-31T05:15:50Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 445036244439be21_train_data.json ds_type: json format: custom path: /workspace/input_data/445036244439be21_train_data.json type: field_input: new_response field_instruction: prompt field_output: org_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: ancient41/f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/445036244439be21_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8d4144fc-9ff0-40f6-938c-971bb0af2635 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8d4144fc-9ff0-40f6-938c-971bb0af2635 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6 This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.6707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1204 | 0.0001 | 1 | 1.1771 |
| 3.5574 | 0.0056 | 50 | 0.7825 |
| 3.665 | 0.0112 | 100 | 0.7170 |
| 3.6566 | 0.0169 | 150 | 0.6775 |
| 3.6301 | 0.0225 | 200 | 0.6707 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
visdata/py26
visdata
2025-01-31T07:23:25Z
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T07:18:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
InsultedByMathematics/alpha_1e-2-beta_1e-2
InsultedByMathematics
2025-01-31T07:21:59Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T07:17:42Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
abaddon182/cdedae3a-3953-41ed-acb9-287e5ba6a04c
abaddon182
2025-01-31T07:21:42Z
8
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "region:us" ]
null
2025-01-31T06:54:16Z
---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cdedae3a-3953-41ed-acb9-287e5ba6a04c
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - bd759e5c8d2b027f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/bd759e5c8d2b027f_train_data.json
  type:
    field_input: answers
    field_instruction: topic
    field_output: text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/cdedae3a-3953-41ed-acb9-287e5ba6a04c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/bd759e5c8d2b027f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3217968f-95e4-42f6-ab2b-878e655e1370
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3217968f-95e4-42f6-ab2b-878e655e1370
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# cdedae3a-3953-41ed-acb9-287e5ba6a04c

This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1080

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.9483 | 0.0108 | 1 | 2.2484 |
| 5.1298 | 0.5420 | 50 | 1.2160 |
| 2.4199 | 1.0840 | 100 | 1.1514 |
| 2.3623 | 1.6260 | 150 | 1.1195 |
| 1.2455 | 2.1680 | 200 | 1.1080 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Kobi-01/distilled_bert_tamil
Kobi-01
2025-01-31T07:21:20Z
84
0
transformers
[ "transformers", "safetensors", "distilbert", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2025-01-10T10:21:23Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
nttx/b959546c-d51c-44fc-aeec-977098c32968
nttx
2025-01-31T07:19:53Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "region:us" ]
null
2025-01-31T07:01:00Z
---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b959546c-d51c-44fc-aeec-977098c32968
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 423760bfd2fbfffa_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/423760bfd2fbfffa_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/b959546c-d51c-44fc-aeec-977098c32968
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/423760bfd2fbfffa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84585b20-d892-48c7-a995-1238079422b0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84585b20-d892-48c7-a995-1238079422b0
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# b959546c-d51c-44fc-aeec-977098c32968

This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6075

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7395 | 0.0410 | 200 | 1.6075 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
beast33/c7d68f13-7fb1-4ded-a461-ea16244e38e8
beast33
2025-01-31T07:17:13Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:46:18Z
---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c7d68f13-7fb1-4ded-a461-ea16244e38e8
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - bd759e5c8d2b027f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/bd759e5c8d2b027f_train_data.json
  type:
    field_input: answers
    field_instruction: topic
    field_output: text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: beast33/c7d68f13-7fb1-4ded-a461-ea16244e38e8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/bd759e5c8d2b027f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3217968f-95e4-42f6-ab2b-878e655e1370
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3217968f-95e4-42f6-ab2b-878e655e1370
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# c7d68f13-7fb1-4ded-a461-ea16244e38e8

This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1200

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 185

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0022 | 0.9986 | 184 | 1.1373 |
| 4.9826 | 1.0041 | 185 | 1.1200 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
havinash-ai/bbe1101f-5c1b-444f-8b48-67bfd058899b
havinash-ai
2025-01-31T07:11:29Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "region:us" ]
null
2025-01-31T07:01:55Z
---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bbe1101f-5c1b-444f-8b48-67bfd058899b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 423760bfd2fbfffa_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/423760bfd2fbfffa_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/bbe1101f-5c1b-444f-8b48-67bfd058899b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/423760bfd2fbfffa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84585b20-d892-48c7-a995-1238079422b0
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84585b20-d892-48c7-a995-1238079422b0
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# bbe1101f-5c1b-444f-8b48-67bfd058899b

This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7416

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.2211 |
| 2.1075 | 0.0007 | 13 | 1.8652 |
| 2.0234 | 0.0013 | 26 | 1.7669 |
| 1.9285 | 0.0020 | 39 | 1.7416 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
lesso01/f6c2b613-3b40-4dc1-8332-b21dbc57874f
lesso01
2025-01-31T07:08:38Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:18:34Z
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f6c2b613-3b40-4dc1-8332-b21dbc57874f
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - dffa8fc58ce66dc6_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json
  type:
    field_instruction: title
    field_output: text
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso01/f6c2b613-3b40-4dc1-8332-b21dbc57874f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# f6c2b613-3b40-4dc1-8332-b21dbc57874f

This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0972 | 200 | nan |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1