modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
---|---|---|---|---|---|---|---|---|---|
tttx/sft_r1_7b | tttx | 2025-02-04T06:31:34Z | 22 | 0 | peft | ["peft", "safetensors", "qwen2", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:tttx/r1-trajectories-collection-round-2", "dataset:tttx/r1-trajectories-arcagi-barc", "license:mit", "region:us"] | null | 2025-02-02T22:44:06Z |
---
base_model: deepseek-ai/Deepseek-R1-Distill-Qwen-7B
datasets:
- tttx/r1-trajectories-collection-round-2
- tttx/r1-trajectories-arcagi-barc
library_name: peft
license: mit
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: sft_r1_7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_r1_7b
This model is a fine-tuned version of [deepseek-ai/Deepseek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/Deepseek-R1-Distill-Qwen-7B) on the tttx/r1-trajectories-collection-round-2 and the tttx/r1-trajectories-arcagi-barc datasets.
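Because this repository contains a PEFT adapter rather than standalone weights, it is typically loaded on top of the base model. The snippet below is a minimal sketch rather than an official usage example; it assumes the adapter targets causal language modeling on the base model named in the metadata above.

```python
# Minimal sketch: load the base model, then attach this PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/Deepseek-R1-Distill-Qwen-7B"  # as listed in the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "tttx/sft_r1_7b")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```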
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jpark677/internvl2-8b-mmmu-3 | jpark677 | 2025-02-04T06:29:10Z | 59 | 0 | transformers | ["transformers", "safetensors", "internvl_chat", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us"] | feature-extraction | 2025-01-31T04:02:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
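In the absence of an official snippet, a minimal loading sketch is shown below. The `custom_code` tag indicates the repository ships its own modeling code, so `trust_remote_code=True` is assumed to be required; the exact inference API is an assumption based on the InternVL2 family.

```python
# Hypothetical sketch: load the checkpoint together with its custom modeling code.
import torch
from transformers import AutoModel, AutoTokenizer

path = "jpark677/internvl2-8b-mmmu-3"
model = AutoModel.from_pretrained(path, torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
```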
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
t2ance/FNO-s20 | t2ance | 2025-02-04T06:28:50Z | 68 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-01-05T14:36:09Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
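As a rough sketch, checkpoints pushed with this mixin can usually be reloaded via the mixin's `from_pretrained` class method on the original model class. The class name `FNO` and its constructor below are placeholders, since the defining library is not documented here:

```python
# Hypothetical sketch: reloading a PyTorchModelHubMixin checkpoint.
# `FNO` is a stand-in; use the actual model class from its defining library.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class FNO(nn.Module, PyTorchModelHubMixin):
    def __init__(self, **kwargs):
        super().__init__()
        # ... layers matching the original architecture go here ...

model = FNO.from_pretrained("t2ance/FNO-s20")
```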
|
lesso/1a2af3a5-f601-4c88-8f0b-c05b8a843409 | lesso | 2025-02-04T06:28:41Z | 9 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna-open-llama-3b-v2", "base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2", "license:apache-2.0", "region:us"] | null | 2025-02-04T06:22:37Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1a2af3a5-f601-4c88-8f0b-c05b8a843409
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 9988054a1155975c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9988054a1155975c_train_data.json
type:
field_input: history
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/1a2af3a5-f601-4c88-8f0b-c05b8a843409
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god08/9988054a1155975c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 29d53164-9e6f-42ae-a37f-4cb166ed6f4f
wandb_project: ab-god08
wandb_run: your_name
wandb_runid: 29d53164-9e6f-42ae-a37f-4cb166ed6f4f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
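For reference, a config like the one above is normally launched with the axolotl CLI. A minimal sketch, assuming the YAML is saved locally as `config.yml`:

```bash
# Sketch: launch training with axolotl 0.4.x (paths and environment are assumptions).
accelerate launch -m axolotl.cli.train config.yml
```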
# 1a2af3a5-f601-4c88-8f0b-c05b8a843409
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08, with optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 78
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2659 | 0.0385 | 1 | 2.2118 |
| 1.7136 | 1.9231 | 50 | 1.7353 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jq/whisper-large-v3-salt-plus-xog-myx-kin-swa | jq | 2025-02-04T06:28:10Z | 41 | 0 | transformers | ["transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-01-30T16:12:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
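No official snippet is provided; a minimal transcription sketch using the 🤗 `pipeline` API is shown below. The audio file name is a placeholder, and the supported languages are not documented here.

```python
# Hedged sketch: transcribe a local audio file with this Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jq/whisper-large-v3-salt-plus-xog-myx-kin-swa",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```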
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
datlaaaaaaa/fae9c0e7-11e1-4603-b2d9-9d8ec9d0b9a6 | datlaaaaaaa | 2025-02-04T06:26:53Z | 10 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna-open-llama-3b-v2", "base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-02-04T06:14:01Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fae9c0e7-11e1-4603-b2d9-9d8ec9d0b9a6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9988054a1155975c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9988054a1155975c_train_data.json
type:
field_input: history
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/fae9c0e7-11e1-4603-b2d9-9d8ec9d0b9a6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9988054a1155975c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 29d53164-9e6f-42ae-a37f-4cb166ed6f4f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 29d53164-9e6f-42ae-a37f-4cb166ed6f4f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fae9c0e7-11e1-4603-b2d9-9d8ec9d0b9a6
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0546 | 0.2421 | 200 | 1.9183 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/aytac | LHRuig | 2025-02-04T06:26:45Z | 7 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:26:40Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aytac
---
# aytac
<Gallery />
## Model description
aytac lora
## Trigger words
You should use `aytac` to trigger the image generation.
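A minimal generation sketch with 🤗 diffusers is shown below; the precision, step count, and prompt are assumptions rather than settings documented by the author.

```python
# Hedged sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/aytac")
pipe.to("cuda")

image = pipe("aytac wearing a suit", num_inference_steps=28).images[0]
image.save("aytac.png")
```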
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aytac/tree/main) them in the Files & versions tab.
|
LHRuig/aykutelms | LHRuig | 2025-02-04T06:25:58Z | 8 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:25:37Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aykutelms
---
# aykutelms
<Gallery />
## Model description
aykutelms lora
## Trigger words
You should use `aykutelms` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aykutelms/tree/main) them in the Files & versions tab.
|
havinash-ai/050c82c0-329a-4dc0-8acb-04a2af64431e | havinash-ai | 2025-02-04T06:24:55Z | 10 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:adapter:NousResearch/Meta-Llama-3-8B", "license:other", "region:us"] | null | 2025-02-04T06:20:36Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 050c82c0-329a-4dc0-8acb-04a2af64431e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3c7f8d22a3b05f19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3c7f8d22a3b05f19_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/050c82c0-329a-4dc0-8acb-04a2af64431e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/3c7f8d22a3b05f19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f956346c-1ba2-40a0-96e7-e24d7e19d3c3
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f956346c-1ba2-40a0-96e7-e24d7e19d3c3
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 050c82c0-329a-4dc0-8acb-04a2af64431e
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.4294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.6095 |
| 1.5417 | 0.0252 | 63 | 1.4651 |
| 1.411 | 0.0504 | 126 | 1.4427 |
| 1.5382 | 0.0755 | 189 | 1.4294 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
trenden/0bc7e690-9b6b-4c5b-bcf8-193dd20b6c3c | trenden | 2025-02-04T06:24:08Z | 9 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us"] | null | 2025-02-04T05:59:13Z |
---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0bc7e690-9b6b-4c5b-bcf8-193dd20b6c3c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 64472e7e5ca00041_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/64472e7e5ca00041_train_data.json
type:
field_input: meshid
field_instruction: meshMajor
field_output: abstractText
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/0bc7e690-9b6b-4c5b-bcf8-193dd20b6c3c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/64472e7e5ca00041_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb6da118-4bd4-4632-9cf4-6bf7d4fdb9b3
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: cb6da118-4bd4-4632-9cf4-6bf7d4fdb9b3
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0bc7e690-9b6b-4c5b-bcf8-193dd20b6c3c
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.7800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.9034 |
| 6.7744 | 0.0085 | 50 | 1.7877 |
| 7.228 | 0.0169 | 100 | 1.7834 |
| 6.8892 | 0.0254 | 150 | 1.7804 |
| 7.1472 | 0.0338 | 200 | 1.7800 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Nexspear/a8d825c2-7690-4065-b59b-0d6a979b356f | Nexspear | 2025-02-04T06:23:01Z | 10 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:adapter:NousResearch/Meta-Llama-3-8B", "license:other", "region:us"] | null | 2025-02-04T06:06:28Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a8d825c2-7690-4065-b59b-0d6a979b356f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3c7f8d22a3b05f19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3c7f8d22a3b05f19_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/a8d825c2-7690-4065-b59b-0d6a979b356f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/3c7f8d22a3b05f19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: f956346c-1ba2-40a0-96e7-e24d7e19d3c3
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: f956346c-1ba2-40a0-96e7-e24d7e19d3c3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a8d825c2-7690-4065-b59b-0d6a979b356f
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.4337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08, with optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1783 | 0.0032 | 1 | 1.6533 |
| 1.877 | 0.1599 | 50 | 1.4841 |
| 1.7372 | 0.3197 | 100 | 1.4337 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/austintheor | LHRuig | 2025-02-04T06:21:18Z | 11 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:21:11Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: austintheor
---
# austintheor
<Gallery />
## Model description
austintheor lora
## Trigger words
You should use `austintheor` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/austintheor/tree/main) them in the Files & versions tab.
|
LHRuig/austinshw | LHRuig | 2025-02-04T06:19:57Z | 10 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:19:50Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: austinshw
---
# austinshw
<Gallery />
## Model description
austinshw lora
## Trigger words
You should use `austinshw` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/austinshw/tree/main) them in the Files & versions tab.
|
LHRuig/asfarion | LHRuig | 2025-02-04T06:19:25Z | 8 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:19:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: asfarion
---
# asfarion
<Gallery />
## Model description
asfarion lora
## Trigger words
You should use `asfarion` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/asfarion/tree/main) them in the Files & versions tab.
|
auxyus/fd5cf91f-d0b5-4d5c-bd47-f0f237efab5a | auxyus | 2025-02-04T06:18:54Z | 11 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-02-04T05:51:45Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd5cf91f-d0b5-4d5c-bd47-f0f237efab5a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6e4f6e948bc6471_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6e4f6e948bc6471_train_data.json
type:
field_input: topic
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: auxyus/fd5cf91f-d0b5-4d5c-bd47-f0f237efab5a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e6e4f6e948bc6471_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 69002658-908b-4f14-a9fb-64d08340747d
wandb_project: Gradients-On-Two
wandb_run: your_name
wandb_runid: 69002658-908b-4f14-a9fb-64d08340747d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fd5cf91f-d0b5-4d5c-bd47-f0f237efab5a
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.6284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0143 | 1 | 3.6620 |
| 3.2763 | 0.1286 | 9 | 2.6868 |
| 1.7025 | 0.2571 | 18 | 1.3585 |
| 1.1514 | 0.3857 | 27 | 0.9190 |
| 0.8298 | 0.5143 | 36 | 0.7803 |
| 0.8113 | 0.6429 | 45 | 0.6975 |
| 0.8633 | 0.7714 | 54 | 0.6908 |
| 0.7252 | 0.9 | 63 | 0.6500 |
| 0.6536 | 1.0286 | 72 | 0.6391 |
| 0.5663 | 1.1571 | 81 | 0.6290 |
| 0.5139 | 1.2857 | 90 | 0.6306 |
| 0.5922 | 1.4143 | 99 | 0.6284 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/asmondgoldsx | LHRuig | 2025-02-04T06:17:34Z | 8 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:17:27Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: asmondgoldsx
---
# asmondgoldsx
<Gallery />
## Model description
asmondgoldsx lora
## Trigger words
You should use `asmondgoldsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/asmondgoldsx/tree/main) them in the Files & versions tab.
|
LHRuig/asmondgold | LHRuig | 2025-02-04T06:17:00Z | 9 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:16:44Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: asmondgold
---
# asmondgold
<Gallery />
## Model description
asmondgold lora
## Trigger words
You should use `asmondgold` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/asmondgold/tree/main) them in the Files & versions tab.
|
Best000/2b707c33-8da2-4a21-b508-4b42124561ed | Best000 | 2025-02-04T06:16:12Z | 10 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:adapter:NousResearch/Meta-Llama-3-8B", "license:other", "region:us"] | null | 2025-02-04T06:09:23Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b707c33-8da2-4a21-b508-4b42124561ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 2b707c33-8da2-4a21-b508-4b42124561ed
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/asiapns | LHRuig | 2025-02-04T06:15:48Z | 10 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:15:36Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# asiapns
<Gallery />
## Model description
asiapns lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/asiapns/tree/main) them in the Files & versions tab.
|
mradermacher/SCE-3-24B-GGUF | mradermacher | 2025-02-04T06:12:09Z | 300 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Cran-May/SCE-3-24B", "base_model:quantized:Cran-May/SCE-3-24B", "endpoints_compatible", "region:us", "conversational"] | null | 2025-02-03T17:46:57Z |
---
base_model: Cran-May/SCE-3-24B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Cran-May/SCE-3-24B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SCE-3-24B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
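As a quick sketch, any file from the table below can be run directly with a recent llama.cpp build by naming the repo and file; the quant chosen here is only an example, not a recommendation beyond the table's notes.

```bash
# Sketch: run a quant straight from this repo with llama.cpp.
llama-cli --hf-repo mradermacher/SCE-3-24B-GGUF \
  --hf-file SCE-3-24B.Q4_K_M.gguf \
  -p "Hello"
```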
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SCE-3-24B-GGUF/resolve/main/SCE-3-24B.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
oldiday/433f14e5-6b6b-40a8-b45c-86579fd22fd7 | oldiday | 2025-02-04T06:11:37Z | 10 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:defog/llama-3-sqlcoder-8b", "base_model:adapter:defog/llama-3-sqlcoder-8b", "license:cc-by-sa-4.0", "region:us"] | null | 2025-02-04T05:39:42Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 433f14e5-6b6b-40a8-b45c-86579fd22fd7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b9b4289b748f826_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b9b4289b748f826_train_data.json
type:
field_instruction: item_title
field_output: comment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/433f14e5-6b6b-40a8-b45c-86579fd22fd7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/3b9b4289b748f826_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: fb91bb99-180c-4ff4-aa46-6d9918134443
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: fb91bb99-180c-4ff4-aa46-6d9918134443
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 433f14e5-6b6b-40a8-b45c-86579fd22fd7
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.9299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0014 | 1 | 3.6452 |
| 3.6055 | 0.0122 | 9 | 3.3931 |
| 3.2192 | 0.0244 | 18 | 3.1764 |
| 3.1097 | 0.0367 | 27 | 3.0733 |
| 3.1131 | 0.0489 | 36 | 3.0239 |
| 2.9565 | 0.0611 | 45 | 2.9911 |
| 2.9408 | 0.0733 | 54 | 2.9681 |
| 2.8996 | 0.0856 | 63 | 2.9517 |
| 3.0113 | 0.0978 | 72 | 2.9399 |
| 2.8834 | 0.1100 | 81 | 2.9339 |
| 2.9892 | 0.1222 | 90 | 2.9308 |
| 2.843 | 0.1345 | 99 | 2.9299 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/artursila | LHRuig | 2025-02-04T06:11:33Z | 8 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-02-04T06:11:28Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: artursila
---
# artursila
<Gallery />
## Model description
artursila lora
## Trigger words
You should use `artursila` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/artursila/tree/main) them in the Files & versions tab.
|
Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_M-GGUF | Triangle104 | 2025-02-04T06:11:21Z | 20 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B", "base_model:quantized:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-02-04T06:09:32Z |
---
base_model: nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B`](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_m.gguf -c 2048
```
|
mrferr3t/1cdc8fd5-66ad-4aea-b3a0-81fa806d0e2d | mrferr3t | 2025-02-04T06:11:17Z | 9 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct", "base_model:adapter:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct", "license:llama3", "region:us"] | null | 2025-02-04T05:56:22Z |
---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1cdc8fd5-66ad-4aea-b3a0-81fa806d0e2d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- a1495fc5a097a229_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a1495fc5a097a229_train_data.json
type:
field_instruction: disease
field_output: symptoms
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/1cdc8fd5-66ad-4aea-b3a0-81fa806d0e2d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/a1495fc5a097a229_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1da0ae8b-2e96-422b-80d9-64dbe42908dd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1da0ae8b-2e96-422b-80d9-64dbe42908dd
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1cdc8fd5-66ad-4aea-b3a0-81fa806d0e2d
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0530
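For reference, a minimal sketch of loading this LoRA adapter on top of the base model with 🤗 PEFT; the prompt mirrors the training format (a disease name as the instruction), and the inference settings are assumptions.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct"
adapter_id = "mrferr3t/1cdc8fd5-66ad-4aea-b3a0-81fa806d0e2d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# The adapter was trained with the disease name as the instruction
# and the symptom list as the target output.
inputs = tokenizer("influenza", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```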
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 175
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 3.0597 |
| No log | 0.1426 | 40 | 0.9813 |
| No log | 0.2852 | 80 | 0.1053 |
| 1.0919 | 0.4278 | 120 | 0.0878 |
| 1.0919 | 0.5704 | 160 | 0.0793 |
| 0.0938 | 0.7130 | 200 | 0.0747 |
| 0.0938 | 0.8556 | 240 | 0.0762 |
| 0.0938 | 0.9982 | 280 | 0.0750 |
| 0.0793 | 1.1408 | 320 | 0.0635 |
| 0.0793 | 1.2834 | 360 | 0.0575 |
| 0.065 | 1.4260 | 400 | 0.0558 |
| 0.065 | 1.5686 | 440 | 0.0609 |
| 0.065 | 1.7112 | 480 | 0.0559 |
| 0.0599 | 1.8538 | 520 | 0.0527 |
| 0.0599 | 1.9964 | 560 | 0.0525 |
| 0.0619 | 2.1390 | 600 | 0.0595 |
| 0.0619 | 2.2816 | 640 | 0.0640 |
| 0.0619 | 2.4242 | 680 | 0.0530 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/35015b36-4c7f-40d3-9363-d217ced67c05
|
kk-aivio
| 2025-02-04T06:10:11Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-02-04T06:03:58Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 35015b36-4c7f-40d3-9363-d217ced67c05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b9b4289b748f826_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b9b4289b748f826_train_data.json
type:
field_instruction: item_title
field_output: comment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/35015b36-4c7f-40d3-9363-d217ced67c05
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b9b4289b748f826_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fb91bb99-180c-4ff4-aa46-6d9918134443
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fb91bb99-180c-4ff4-aa46-6d9918134443
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 35015b36-4c7f-40d3-9363-d217ced67c05
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 3.9196 |
| 3.0679 | 0.0170 | 50 | 3.1643 |
| 2.9438 | 0.0340 | 100 | 3.0732 |
| 2.9586 | 0.0509 | 150 | 3.0325 |
| 3.0316 | 0.0679 | 200 | 3.0224 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
reaper24/model_8bit
|
reaper24
| 2025-02-04T06:08:54Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T06:07:58Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** reaper24
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LHRuig/stephenamll
|
LHRuig
| 2025-02-04T06:08:43Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T06:08:39Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: stephenamll
---
# stephenamll
<Gallery />
## Model description
stephenamll lora
## Trigger words
You should use `stephenamll` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/stephenamll/tree/main) them in the Files & versions tab.
|
tensoralchemistdev01/bb22
|
tensoralchemistdev01
| 2025-02-04T06:08:36Z | 83 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T06:03:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
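In the absence of an official snippet, a minimal sketch assuming a standard 🤗 Transformers causal language model; the prompt and generation settings are illustrative only.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tensoralchemistdev01/bb22"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```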
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LHRuig/arnoldschwarz
|
LHRuig
| 2025-02-04T06:07:36Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T06:07:10Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: arnoldschwar
---
# arnoldschwar
<Gallery />
## Model description
arnoldschwar lora
## Trigger words
You should use `arnoldschwar` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/arnoldschwarz/tree/main) them in the Files & versions tab.
|
John6666/redp8nt-noobai11-v11-sdxl
|
John6666
| 2025-02-04T06:07:25Z | 13 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2D",
"hentai",
"painterly style",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:finetune:Laxhar/noobai-XL-1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-02-04T06:00:36Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2D
- hentai
- painterly style
- illustrious
base_model: Laxhar/noobai-XL-1.1
---
The original model is [here](https://civitai.com/models/1157156/redp8nt-noobai11?modelVersionId=1361319).
This model was created by [bloodsplash](https://civitai.com/user/bloodsplash).
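A minimal sketch of loading this checkpoint as a Stable Diffusion XL pipeline with 🤗 Diffusers; the prompt and sampler settings are illustrative assumptions.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/redp8nt-noobai11-v11-sdxl", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    "1girl, painterly style, soft lighting, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```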
|
earnxus/d1c020ab-808c-4bb7-9a64-ed5f67960b17
|
earnxus
| 2025-02-04T06:07:19Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T05:17:29Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1c020ab-808c-4bb7-9a64-ed5f67960b17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c34072e21f82fd36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c34072e21f82fd36_train_data.json
type:
field_instruction: qwq
field_output: problem
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/d1c020ab-808c-4bb7-9a64-ed5f67960b17
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c34072e21f82fd36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a8c27290-a0ee-4a3d-85a4-688f5c1c52b6
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: a8c27290-a0ee-4a3d-85a4-688f5c1c52b6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# d1c020ab-808c-4bb7-9a64-ed5f67960b17
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2872 | 0.0127 | 200 | 0.3554 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
reaper24/model_q4_k_m
|
reaper24
| 2025-02-04T06:05:20Z | 22 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T06:04:46Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** reaper24
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LHRuig/arnoldschwar
|
LHRuig
| 2025-02-04T06:04:01Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T06:03:56Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: arnoldschwar
---
# arnoldschwar
<Gallery />
## Model description
arnoldschwar lora
## Trigger words
You should use `arnoldschwar` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/arnoldschwar/tree/main) them in the Files & versions tab.
|
LHRuig/armanram
|
LHRuig
| 2025-02-04T06:03:35Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T06:03:14Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: armanram
---
# armanram
<Gallery />
## Model description
armanram lora
## Trigger words
You should use `armanram` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/armanram/tree/main) them in the Files & versions tab.
|
LHRuig/aricsx
|
LHRuig
| 2025-02-04T06:02:47Z | 8 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T06:02:36Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aricsx
---
# aricsx
<Gallery />
## Model description
aricsx lora
## Trigger words
You should use `aricsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aricsx/tree/main) them in the Files & versions tab.
|
robiulawaldev/a3b9ff45-428f-4bd8-98b7-a5248b0d9081
|
robiulawaldev
| 2025-02-04T06:02:01Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"base_model:adapter:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"license:llama3",
"region:us"
] | null | 2025-02-04T05:56:49Z |
---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a3b9ff45-428f-4bd8-98b7-a5248b0d9081
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# a3b9ff45-428f-4bd8-98b7-a5248b0d9081
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fabian6567/model-overfitted
|
fabian6567
| 2025-02-04T06:01:50Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T05:57:29Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fabian6567
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LHRuig/areuben
|
LHRuig
| 2025-02-04T06:01:50Z | 8 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T06:01:46Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: areuben
---
# areuben
<Gallery />
## Model description
areuben lora
## Trigger words
You should use `areuben` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/areuben/tree/main) them in the Files & versions tab.
|
LHRuig/archersx
|
LHRuig
| 2025-02-04T05:59:42Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:59:37Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: archersx
---
# archersx
<Gallery />
## Model description
archersx lora
## Trigger words
You should use `archersx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/archersx/tree/main) them in the Files & versions tab.
|
LHRuig/aragornviggo
|
LHRuig
| 2025-02-04T05:57:31Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:57:27Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aragornviggo
---
# aragornviggo
<Gallery />
## Model description
aragornviggo lora
## Trigger words
You should use `aragornviggo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aragornviggo/tree/main) them in the Files & versions tab.
|
kk-aivio/200f4114-0fca-4b74-b365-f81ac9f59a76
|
kk-aivio
| 2025-02-04T05:56:36Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:31:29Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 200f4114-0fca-4b74-b365-f81ac9f59a76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/200f4114-0fca-4b74-b365-f81ac9f59a76
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 200f4114-0fca-4b74-b365-f81ac9f59a76
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_S-GGUF
|
Triangle104
| 2025-02-04T05:56:05Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"base_model:quantized:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T05:54:18Z |
---
base_model: nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B`](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q5_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q5_k_s.gguf -c 2048
```
|
John6666/ikastrious-noobai-xl-v92-sdxl
|
John6666
| 2025-02-04T05:55:41Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:finetune:Laxhar/noobai-XL-1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-02-04T05:48:33Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- illustrious
base_model: Laxhar/noobai-XL-1.1
---
The original model is [here](https://civitai.com/models/874216/ikastrious-noobai-xl?modelVersionId=1367989).
This model was created by [giko](https://civitai.com/user/giko).
|
reaper24/model_16bit_gguf
|
reaper24
| 2025-02-04T05:55:38Z | 45 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T05:53:58Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** reaper24
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nathanialhunt/4d996d37-081c-4183-9112-28695b9f58b5
|
nathanialhunt
| 2025-02-04T05:55:29Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:30:35Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4d996d37-081c-4183-9112-28695b9f58b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/4d996d37-081c-4183-9112-28695b9f58b5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4d996d37-081c-4183-9112-28695b9f58b5
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/antoinedupont
|
LHRuig
| 2025-02-04T05:55:29Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:55:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: antoinedupont
---
# antoinedupont
<Gallery />
## Model description
antoinedupont lora
## Trigger words
You should use `antoinedupont` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/antoinedupont/tree/main) them in the Files & versions tab.
|
aseratus1/09c4d30d-efde-4112-aa25-bd99b624b201
|
aseratus1
| 2025-02-04T05:55:19Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T05:42:19Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 09c4d30d-efde-4112-aa25-bd99b624b201
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74fd83b58bc4ad47_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74fd83b58bc4ad47_train_data.json
type:
field_input: conversation
field_instruction: note
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aseratus1/09c4d30d-efde-4112-aa25-bd99b624b201
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/74fd83b58bc4ad47_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21068da4-737c-49df-9240-0bd8ff25df8b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21068da4-737c-49df-9240-0bd8ff25df8b
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 09c4d30d-efde-4112-aa25-bd99b624b201
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1192 | 0.0565 | 200 | 0.9429 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
laquythang/3550677e-e305-49f2-80f2-a4ab175087d7
|
laquythang
| 2025-02-04T05:54:33Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T05:42:51Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3550677e-e305-49f2-80f2-a4ab175087d7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74fd83b58bc4ad47_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74fd83b58bc4ad47_train_data.json
type:
field_input: conversation
field_instruction: note
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/3550677e-e305-49f2-80f2-a4ab175087d7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/74fd83b58bc4ad47_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21068da4-737c-49df-9240-0bd8ff25df8b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21068da4-737c-49df-9240-0bd8ff25df8b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3550677e-e305-49f2-80f2-a4ab175087d7
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9055 | 0.0565 | 200 | 0.9308 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/anthonyhopkn
|
LHRuig
| 2025-02-04T05:54:25Z | 9 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:54:21Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anthonyhopkn
---
# anthonyhopkn
<Gallery />
## Model description
anthonyhopkn lora
## Trigger words
You should use `anthonyhopkn` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/anthonyhopkn/tree/main) them in the Files & versions tab.
|
LHRuig/anthonyburdan
|
LHRuig
| 2025-02-04T05:53:11Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:53:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anthonyburdan
---
# anthonyburdan
<Gallery />
## Model description
anthonyburdan lora
## Trigger words
You should use `anthonyburdan` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/anthonyburdan/tree/main) them in the Files & versions tab.
|
LHRuig/anthonymorl
|
LHRuig
| 2025-02-04T05:52:28Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:52:23Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anthonymorl
---
# anthonymorl
<Gallery />
## Model description
anthonymorl lora
## Trigger words
You should use `anthonymorl` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/anthonymorl/tree/main) them in the Files & versions tab.
|
bane5631/2750b9d9-ded4-494a-b1ca-811ef86cc80d
|
bane5631
| 2025-02-04T05:51:23Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T05:26:51Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2750b9d9-ded4-494a-b1ca-811ef86cc80d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6e4f6e948bc6471_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6e4f6e948bc6471_train_data.json
type:
field_input: topic
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bane5631/2750b9d9-ded4-494a-b1ca-811ef86cc80d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e6e4f6e948bc6471_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 69002658-908b-4f14-a9fb-64d08340747d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 69002658-908b-4f14-a9fb-64d08340747d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2750b9d9-ded4-494a-b1ca-811ef86cc80d
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 140
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4008 | 1.0 | 140 | 0.6787 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nttx/cf97c69c-9b44-4a7d-8de5-5062564fd8f0
|
nttx
| 2025-02-04T05:50:44Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T05:41:20Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf97c69c-9b44-4a7d-8de5-5062564fd8f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74fd83b58bc4ad47_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74fd83b58bc4ad47_train_data.json
type:
field_input: conversation
field_instruction: note
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/cf97c69c-9b44-4a7d-8de5-5062564fd8f0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/74fd83b58bc4ad47_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21068da4-737c-49df-9240-0bd8ff25df8b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21068da4-737c-49df-9240-0bd8ff25df8b
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cf97c69c-9b44-4a7d-8de5-5062564fd8f0
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2235 | 0.1130 | 200 | 0.2128 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
biustnaspust/puszek50
|
biustnaspust
| 2025-02-04T05:50:32Z | 41 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T05:45:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
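In the absence of author-provided code, a minimal, hypothetical sketch using the standard `transformers` pipeline (only the model ID is taken from this repo) would be:
```python
# Hypothetical starter, not supplied by the authors: plain text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="biustnaspust/puszek50", device_map="auto")
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```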
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso/71d9fb6c-4d85-4aec-8d15-86465f62e01a
|
lesso
| 2025-02-04T05:49:45Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:03:06Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 71d9fb6c-4d85-4aec-8d15-86465f62e01a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/71d9fb6c-4d85-4aec-8d15-86465f62e01a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god13/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: ab-god13
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 71d9fb6c-4d85-4aec-8d15-86465f62e01a
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
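As a cross-check, the reported total batch size follows directly from the per-device batch size, gradient accumulation, and device count listed above:
```python
# total_train_batch_size = per-device batch size x grad-accum steps x GPUs
micro_batch_size = 8   # train_batch_size above
grad_accum_steps = 4   # gradient_accumulation_steps above
num_devices = 8        # multi-GPU run
assert micro_batch_size * grad_accum_steps * num_devices == 256
```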
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6474 | 0.0054 | 1 | 2.8695 |
| 2.7467 | 0.2703 | 50 | 2.5603 |
| 2.707 | 0.5405 | 100 | 2.5017 |
| 2.736 | 0.8108 | 150 | 2.4729 |
| 2.4112 | 1.0811 | 200 | 2.4630 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/andycut
|
LHRuig
| 2025-02-04T05:48:02Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:47:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: andycut
---
# andycut
<Gallery />
## Model description
andycut lora
## Trigger words
You should use `andycut` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/andycut/tree/main) them in the Files & versions tab.
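A minimal generation sketch, assuming the weights load with the standard `diffusers` LoRA API (FLUX.1-dev is a gated model, so an authenticated Hugging Face login is required):
```python
import torch
from diffusers import FluxPipeline

# Load the gated FLUX.1-dev base pipeline, then apply this LoRA.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/andycut")
pipe.to("cuda")

# Include the trigger word `andycut` in the prompt, as noted above.
image = pipe("andycut wearing a suit", num_inference_steps=28).images[0]
image.save("suit.png")
```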
|
mrferr3t/ec115bc8-c139-42af-aa1d-ba408f218dda
|
mrferr3t
| 2025-02-04T05:46:11Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | 2025-02-04T04:38:36Z |
---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ec115bc8-c139-42af-aa1d-ba408f218dda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: facebook/opt-1.3b
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 865018e1e26a6750_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/865018e1e26a6750_train_data.json
type:
field_input: category
field_instruction: prompt
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/ec115bc8-c139-42af-aa1d-ba408f218dda
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/865018e1e26a6750_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ac737825-d1b0-4693-98b1-0e9d338f04f7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ac737825-d1b0-4693-98b1-0e9d338f04f7
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ec115bc8-c139-42af-aa1d-ba408f218dda
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 363
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.0034 | 1 | 1.4209 |
| No log | 0.1375 | 40 | 1.2780 |
| No log | 0.2749 | 80 | 1.0079 |
| 2.4977 | 0.4124 | 120 | 0.8917 |
| 2.4977 | 0.5498 | 160 | 0.8292 |
| 1.7889 | 0.6873 | 200 | 0.7923 |
| 1.7889 | 0.8247 | 240 | 0.7640 |
| 1.7889 | 0.9622 | 280 | 0.7362 |
| 1.6126 | 1.0997 | 320 | 0.7179 |
| 1.6126 | 1.2371 | 360 | 0.7066 |
| 1.493 | 1.3746 | 400 | 0.6835 |
| 1.493 | 1.5120 | 440 | 0.6650 |
| 1.493 | 1.6495 | 480 | 0.6581 |
| 1.4002 | 1.7869 | 520 | 0.6351 |
| 1.4002 | 1.9244 | 560 | 0.6221 |
| 1.3197 | 2.0619 | 600 | 0.6082 |
| 1.3197 | 2.1993 | 640 | 0.5975 |
| 1.3197 | 2.3368 | 680 | 0.5838 |
| 1.2424 | 2.4742 | 720 | 0.5732 |
| 1.2424 | 2.6117 | 760 | 0.5607 |
| 1.1842 | 2.7491 | 800 | 0.5476 |
| 1.1842 | 2.8866 | 840 | 0.5407 |
| 1.1842 | 3.0241 | 880 | 0.5306 |
| 1.1193 | 3.1615 | 920 | 0.5175 |
| 1.1193 | 3.2990 | 960 | 0.5120 |
| 1.0643 | 3.4364 | 1000 | 0.5031 |
| 1.0643 | 3.5739 | 1040 | 0.4926 |
| 1.0643 | 3.7113 | 1080 | 0.4833 |
| 1.0313 | 3.8488 | 1120 | 0.4757 |
| 1.0313 | 3.9863 | 1160 | 0.4716 |
| 0.9792 | 4.1237 | 1200 | 0.4642 |
| 0.9792 | 4.2612 | 1240 | 0.4573 |
| 0.9792 | 4.3986 | 1280 | 0.4507 |
| 0.9349 | 4.5361 | 1320 | 0.4439 |
| 0.9349 | 4.6735 | 1360 | 0.4401 |
| 0.9144 | 4.8110 | 1400 | 0.4325 |
| 0.9144 | 4.9485 | 1440 | 0.4269 |
| 0.9144 | 5.0859 | 1480 | 0.4215 |
| 0.8651 | 5.2234 | 1520 | 0.4160 |
| 0.8651 | 5.3608 | 1560 | 0.4099 |
| 0.8401 | 5.4983 | 1600 | 0.4063 |
| 0.8401 | 5.6357 | 1640 | 0.4021 |
| 0.8401 | 5.7732 | 1680 | 0.4004 |
| 0.8193 | 5.9107 | 1720 | 0.3935 |
| 0.8193 | 6.0481 | 1760 | 0.3907 |
| 0.7797 | 6.1856 | 1800 | 0.3889 |
| 0.7797 | 6.3230 | 1840 | 0.3832 |
| 0.7797 | 6.4605 | 1880 | 0.3841 |
| 0.7569 | 6.5979 | 1920 | 0.3794 |
| 0.7569 | 6.7354 | 1960 | 0.3763 |
| 0.7553 | 6.8729 | 2000 | 0.3719 |
| 0.7553 | 7.0103 | 2040 | 0.3709 |
| 0.7553 | 7.1478 | 2080 | 0.3677 |
| 0.706 | 7.2852 | 2120 | 0.3678 |
| 0.706 | 7.4227 | 2160 | 0.3643 |
| 0.7028 | 7.5601 | 2200 | 0.3603 |
| 0.7028 | 7.6976 | 2240 | 0.3593 |
| 0.7028 | 7.8351 | 2280 | 0.3554 |
| 0.6982 | 7.9725 | 2320 | 0.3540 |
| 0.6982 | 8.1100 | 2360 | 0.3552 |
| 0.6574 | 8.2474 | 2400 | 0.3537 |
| 0.6574 | 8.3849 | 2440 | 0.3525 |
| 0.6574 | 8.5223 | 2480 | 0.3515 |
| 0.6605 | 8.6598 | 2520 | 0.3481 |
| 0.6605 | 8.7973 | 2560 | 0.3463 |
| 0.6595 | 8.9347 | 2600 | 0.3455 |
| 0.6595 | 9.0722 | 2640 | 0.3460 |
| 0.6595 | 9.2096 | 2680 | 0.3437 |
| 0.6202 | 9.3471 | 2720 | 0.3405 |
| 0.6202 | 9.4845 | 2760 | 0.3395 |
| 0.6263 | 9.6220 | 2800 | 0.3386 |
| 0.6263 | 9.7595 | 2840 | 0.3357 |
| 0.6263 | 9.8969 | 2880 | 0.3334 |
| 0.6157 | 10.0344 | 2920 | 0.3372 |
| 0.6157 | 10.1718 | 2960 | 0.3380 |
| 0.585 | 10.3093 | 3000 | 0.3369 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leixa/19350214-bb71-41ac-8c18-13064e1f4f30
|
leixa
| 2025-02-04T05:44:43Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-02-04T04:51:36Z |
---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 19350214-bb71-41ac-8c18-13064e1f4f30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2b92ab41fd78d964_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2b92ab41fd78d964_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/19350214-bb71-41ac-8c18-13064e1f4f30
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/2b92ab41fd78d964_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d52e1c3e-d02f-4f16-8a19-04af65ce7992
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d52e1c3e-d02f-4f16-8a19-04af65ce7992
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 19350214-bb71-41ac-8c18-13064e1f4f30
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5718
## Model description
More information needed
## Intended uses & limitations
More information needed
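One common deployment path, sketched below under the assumption that the adapter applies cleanly with `peft`, is to merge the LoRA weights into the base model so inference no longer needs the adapter machinery:
```python
# Sketch: merge this LoRA adapter into the SOLAR base weights
# (needs enough memory for the full 10.7B model).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0", device_map="auto")
merged = PeftModel.from_pretrained(base, "leixa/19350214-bb71-41ac-8c18-13064e1f4f30").merge_and_unload()
merged.save_pretrained("solar-10.7b-merged")  # hypothetical output directory
```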
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 0.8825 |
| 0.7539 | 0.0328 | 9 | 0.6861 |
| 0.6087 | 0.0656 | 18 | 0.6304 |
| 0.5978 | 0.0985 | 27 | 0.6114 |
| 0.6131 | 0.1313 | 36 | 0.6000 |
| 0.6093 | 0.1641 | 45 | 0.5919 |
| 0.5994 | 0.1969 | 54 | 0.5852 |
| 0.5922 | 0.2297 | 63 | 0.5802 |
| 0.5946 | 0.2625 | 72 | 0.5760 |
| 0.5759 | 0.2954 | 81 | 0.5733 |
| 0.5821 | 0.3282 | 90 | 0.5720 |
| 0.5755 | 0.3610 | 99 | 0.5718 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
prxy5604/ea561f4e-8d7a-4672-a68f-a2ffce4792e2
|
prxy5604
| 2025-02-04T05:44:35Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-02-04T05:09:42Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ea561f4e-8d7a-4672-a68f-a2ffce4792e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b9b4289b748f826_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b9b4289b748f826_train_data.json
type:
field_instruction: item_title
field_output: comment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/ea561f4e-8d7a-4672-a68f-a2ffce4792e2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3b9b4289b748f826_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fb91bb99-180c-4ff4-aa46-6d9918134443
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fb91bb99-180c-4ff4-aa46-6d9918134443
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ea561f4e-8d7a-4672-a68f-a2ffce4792e2
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1054 | 0.0014 | 1 | 4.3530 |
| 4.0698 | 0.0679 | 50 | 4.0311 |
| 4.1273 | 0.1358 | 100 | 3.5088 |
| 4.0845 | 0.2037 | 150 | 3.0917 |
| 4.9535 | 0.2716 | 200 | 3.0228 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ciloku/5a4fcb7a-f816-42ea-9149-6c968f2d88d2
|
ciloku
| 2025-02-04T05:44:35Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-02-04T05:09:50Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5a4fcb7a-f816-42ea-9149-6c968f2d88d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- 3b9b4289b748f826_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b9b4289b748f826_train_data.json
type:
field_instruction: item_title
field_output: comment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ciloku/5a4fcb7a-f816-42ea-9149-6c968f2d88d2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 6.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3b9b4289b748f826_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fb91bb99-180c-4ff4-aa46-6d9918134443
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fb91bb99-180c-4ff4-aa46-6d9918134443
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5a4fcb7a-f816-42ea-9149-6c968f2d88d2
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1182 | 0.0014 | 1 | 4.3540 |
| 4.4217 | 0.0679 | 50 | 3.6466 |
| 4.5185 | 0.1358 | 100 | 3.3408 |
| 4.8501 | 0.2037 | 150 | 3.0678 |
| 4.8512 | 0.2716 | 200 | 3.0434 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/andrewrea
|
LHRuig
| 2025-02-04T05:44:09Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:44:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: andrewrea
---
# andrewrea
<Gallery />
## Model description
andrewrea lora
## Trigger words
You should use `andrewrea` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/andrewrea/tree/main) them in the Files & versions tab.
|
Kyungjin-Kim/mmc_roberta_500000_es-ipa
|
Kyungjin-Kim
| 2025-02-04T05:43:27Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-02-03T23:36:39Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: mmc_roberta_500000_es-ipa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mmc_roberta_500000_es-ipa
This model was fine-tuned from an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
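A hypothetical fill-mask query is sketched below; the expected input format (orthographic Spanish vs. IPA) is not documented in this card, so treat it purely as an API illustration:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Kyungjin-Kim/mmc_roberta_500000_es-ipa")
# RoBERTa-style tokenizers usually use <mask>; read it from the tokenizer to be safe.
for candidate in fill(f"el tiempo {fill.tokenizer.mask_token} muy bueno"):
    print(candidate["token_str"], round(candidate["score"], 4))
```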
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- training_steps: 100000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_M-GGUF
|
Triangle104
| 2025-02-04T05:42:27Z | 21 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"base_model:quantized:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T05:40:54Z |
---
base_model: nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B`](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_m.gguf -c 2048
```
|
brew35/072cce4b-98f6-4c95-bed0-15811a60d568
|
brew35
| 2025-02-04T05:38:59Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b",
"base_model:adapter:unsloth/gemma-2-9b",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T04:33:15Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 072cce4b-98f6-4c95-bed0-15811a60d568
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aca1347c2eff58c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aca1347c2eff58c3_train_data.json
type:
field_instruction: question_text
field_output: document_plaintext
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: brew35/072cce4b-98f6-4c95-bed0-15811a60d568
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/aca1347c2eff58c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 36088511-e20e-40ed-8fa3-5090e5d7f560
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 36088511-e20e-40ed-8fa3-5090e5d7f560
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 072cce4b-98f6-4c95-bed0-15811a60d568
This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7741
## Model description
More information needed
## Intended uses & limitations
More information needed
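Mirroring the 8-bit setting used during training, a minimal loading sketch (assuming `bitsandbytes` is installed) would be:
```python
# Sketch: base model in 8-bit, LoRA adapter from this repo on top.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-9b", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "brew35/072cce4b-98f6-4c95-bed0-15811a60d568")
```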
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9911 | 0.0261 | 200 | 1.7741 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
anvorja/roberta-base-biomedical-clinical-es-ner-breast-cancer
|
anvorja
| 2025-02-04T05:37:25Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-02-04T04:06:52Z |
---
library_name: transformers
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-ner-breast-cancer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-ner-breast-cancer
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2843
- Precision: 0.8858
- Recall: 0.8799
- F1: 0.8829
- Accuracy: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
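An illustrative invocation is sketched below; the entity label set is not documented in this card, so the output tags are whatever the checkpoint defines:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="anvorja/roberta-base-biomedical-clinical-es-ner-breast-cancer",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Paciente con carcinoma ductal infiltrante de mama izquierda."))
```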
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.0684 | 1.0 | 213 | 2.5125 | 0.0 | 0.0 | 0.0 | 0.4888 |
| 1.1248 | 2.0 | 426 | 1.2611 | 0.5215 | 0.4616 | 0.4897 | 0.7386 |
| 0.5784 | 3.0 | 639 | 0.6768 | 0.7367 | 0.7785 | 0.7571 | 0.8910 |
| 0.3367 | 4.0 | 852 | 0.4469 | 0.7996 | 0.8359 | 0.8174 | 0.9227 |
| 0.2784 | 5.0 | 1065 | 0.3739 | 0.8410 | 0.8646 | 0.8526 | 0.9328 |
| 0.1799 | 6.0 | 1278 | 0.3285 | 0.8709 | 0.8686 | 0.8697 | 0.9393 |
| 0.1392 | 7.0 | 1491 | 0.3132 | 0.8758 | 0.8659 | 0.8708 | 0.9397 |
| 0.1399 | 8.0 | 1704 | 0.3047 | 0.8798 | 0.8739 | 0.8768 | 0.9427 |
| 0.1207 | 9.0 | 1917 | 0.3080 | 0.8755 | 0.8773 | 0.8764 | 0.9400 |
| 0.0968 | 10.0 | 2130 | 0.3021 | 0.8757 | 0.8739 | 0.8748 | 0.9395 |
| 0.1218 | 11.0 | 2343 | 0.2862 | 0.8835 | 0.8753 | 0.8794 | 0.9431 |
| 0.088 | 12.0 | 2556 | 0.2894 | 0.8807 | 0.8819 | 0.8813 | 0.9429 |
| 0.0808 | 13.0 | 2769 | 0.2891 | 0.8818 | 0.8759 | 0.8788 | 0.9451 |
| 0.1002 | 14.0 | 2982 | 0.2829 | 0.8837 | 0.8766 | 0.8801 | 0.9453 |
| 0.0617 | 15.0 | 3195 | 0.2840 | 0.8820 | 0.8773 | 0.8796 | 0.9460 |
| 0.0757 | 16.0 | 3408 | 0.2843 | 0.8858 | 0.8799 | 0.8829 | 0.9477 |
| 0.0758 | 17.0 | 3621 | 0.2869 | 0.8845 | 0.8786 | 0.8815 | 0.9462 |
| 0.0617 | 18.0 | 3834 | 0.2844 | 0.8835 | 0.8799 | 0.8817 | 0.9463 |
| 0.0719 | 19.0 | 4047 | 0.2842 | 0.8852 | 0.8793 | 0.8822 | 0.9467 |
| 0.0717 | 19.9088 | 4240 | 0.2842 | 0.8852 | 0.8793 | 0.8822 | 0.9467 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
LHRuig/alexandrejubl
|
LHRuig
| 2025-02-04T05:35:01Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:34:57Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alexandrejubl
---
# alexandrejubl
<Gallery />
## Model description
alexandrejubl lora
## Trigger words
You should use `alexandrejubl` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alexandrejubl/tree/main) them in the Files & versions tab.
|
mrHungddddh/845ca854-d8a6-415e-aa62-71bfef4ac9c9
|
mrHungddddh
| 2025-02-04T05:33:45Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:28:22Z |
---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 845ca854-d8a6-415e-aa62-71bfef4ac9c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8f23d0c27dcb0f9f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8f23d0c27dcb0f9f_train_data.json
type:
field_input: evidence
field_instruction: user_input
field_output: claim
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/845ca854-d8a6-415e-aa62-71bfef4ac9c9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8f23d0c27dcb0f9f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: afeef3dd-1e46-4c12-b26d-35001f70da6e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: afeef3dd-1e46-4c12-b26d-35001f70da6e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 845ca854-d8a6-415e-aa62-71bfef4ac9c9
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9613 | 0.0035 | 200 | 0.9622 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Pravallika6/detr-resnet-50-finetuned-credentials
|
Pravallika6
| 2025-02-04T05:33:07Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-02-03T20:51:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
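As a placeholder until the authors fill this in, a hypothetical sketch using the standard object-detection pipeline:
```python
from transformers import pipeline

# "document.jpg" is a stand-in path; any PIL-loadable image works.
detector = pipeline("object-detection", model="Pravallika6/detr-resnet-50-finetuned-credentials")
for det in detector("document.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```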
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oldiday/7576c91c-10b4-49e2-8393-8055fec170f0
|
oldiday
| 2025-02-04T05:32:06Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T05:10:36Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7576c91c-10b4-49e2-8393-8055fec170f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9bd7b6044d104eec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9bd7b6044d104eec_train_data.json
type:
field_input: ''
field_instruction: input_text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/7576c91c-10b4-49e2-8393-8055fec170f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/9bd7b6044d104eec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
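For reference, a config like the one above is normally launched through the axolotl CLI. A minimal sketch, assuming a working axolotl 0.4.x installation and that the YAML is saved locally as `config.yaml` (a hypothetical filename):

```bash
# Minimal sketch — assumes axolotl 0.4.x and accelerate are installed;
# config.yaml is a local copy of the YAML above.
accelerate launch -m axolotl.cli.train config.yaml
```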
# 7576c91c-10b4-49e2-8393-8055fec170f0
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0125 | 1 | 2.0694 |
| 3.0347 | 0.1121 | 9 | 0.3758 |
| 0.5592 | 0.2243 | 18 | 0.1338 |
| 0.4166 | 0.3364 | 27 | 0.0836 |
| 0.3085 | 0.4486 | 36 | 0.0722 |
| 0.1686 | 0.5607 | 45 | 0.0535 |
| 0.1935 | 0.6729 | 54 | 0.0369 |
| 0.1384 | 0.7850 | 63 | 0.0295 |
| 0.0998 | 0.8972 | 72 | 0.0225 |
| 0.1406 | 1.0093 | 81 | 0.0230 |
| 0.0726 | 1.1215 | 90 | 0.0219 |
| 0.0303 | 1.2336 | 99 | 0.0215 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/alexcut
|
LHRuig
| 2025-02-04T05:31:57Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:31:53Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alexcut
---
# alexcut
<Gallery />
## Model description
alexcut LoRA
## Trigger words
You should use `alexcut` to trigger the image generation.
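A minimal inference sketch with diffusers, assuming the adapter is compatible with `FluxPipeline`'s LoRA loader (untested):

```python
# Minimal sketch — assumes the LoRA loads directly onto the FLUX.1-dev base.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/alexcut")
pipe.to("cuda")  # FLUX.1-dev needs a large-memory GPU
image = pipe("alexcut wearing a suit").images[0]
image.save("alexcut.png")
```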
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alexcut/tree/main) them in the Files & versions tab.
|
abaddon182/7fec50ae-d171-4d77-9ee5-5ae4e3971de0
|
abaddon182
| 2025-02-04T05:30:33Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:54:43Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7fec50ae-d171-4d77-9ee5-5ae4e3971de0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6e4f6e948bc6471_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6e4f6e948bc6471_train_data.json
type:
field_input: topic
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/7fec50ae-d171-4d77-9ee5-5ae4e3971de0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e6e4f6e948bc6471_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 69002658-908b-4f14-a9fb-64d08340747d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 69002658-908b-4f14-a9fb-64d08340747d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7fec50ae-d171-4d77-9ee5-5ae4e3971de0
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (overriding the default betas=(0.9, 0.999) and epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.985 | 0.0143 | 1 | 3.7549 |
| 0.6906 | 0.7143 | 50 | 0.7170 |
| 0.2946 | 1.4286 | 100 | 0.5829 |
| 0.1743 | 2.1429 | 150 | 0.5703 |
| 0.0712 | 2.8571 | 200 | 0.6212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/alesko
|
LHRuig
| 2025-02-04T05:29:38Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:29:10Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alesko
---
# alesko
<Gallery />
## Model description
alesko LoRA
## Trigger words
You should use `alesko` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alesko/tree/main) them in the Files & versions tab.
|
oiehhun/love_chatbot
|
oiehhun
| 2025-02-04T05:29:20Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T05:27:03Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oiehhun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JoeKinng14/test_trainer
|
JoeKinng14
| 2025-02-04T05:29:05Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T05:28:26Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5120
- Accuracy: 0.884
## Model description
More information needed
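As a starting point, the checkpoint should load as an ordinary BERT sequence classifier; a minimal sketch (label names are undocumented, so outputs will be generic `LABEL_n` ids):

```python
# Minimal sketch — the label id/name mapping is undocumented for this checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="JoeKinng14/test_trainer")
print(clf("This movie was surprisingly good."))
```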
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.3425 | 0.849 |
| No log | 2.0 | 250 | 0.4071 | 0.874 |
| No log | 3.0 | 375 | 0.5120 | 0.884 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
LHRuig/albundy
|
LHRuig
| 2025-02-04T05:27:55Z | 9 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:27:51Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: albundy
---
# albundy
<Gallery />
## Model description
albundy LoRA
## Trigger words
You should use `albundy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/albundy/tree/main) them in the Files & versions tab.
|
LHRuig/albertdupontl
|
LHRuig
| 2025-02-04T05:27:27Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:26:53Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: albertdupontl
---
# albertdupontl
<Gallery />
## Model description
albertdupontl LoRA
## Trigger words
You should use `albertdupontl` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/albertdupontl/tree/main) them in the Files & versions tab.
|
Best000/ca86e901-45fe-4b2b-ad97-9ef3848616ad
|
Best000
| 2025-02-04T05:27:19Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:02:08Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ca86e901-45fe-4b2b-ad97-9ef3848616ad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/ca86e901-45fe-4b2b-ad97-9ef3848616ad
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ca86e901-45fe-4b2b-ad97-9ef3848616ad
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
havinash-ai/72018811-3923-4491-9a6d-c9fb992d2204
|
havinash-ai
| 2025-02-04T05:27:05Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:02:06Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 72018811-3923-4491-9a6d-c9fb992d2204
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/72018811-3923-4491-9a6d-c9fb992d2204
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 72018811-3923-4491-9a6d-c9fb992d2204
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
trenden/99bd7935-0a86-4700-8869-582d321fefbd
|
trenden
| 2025-02-04T05:27:05Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:02:05Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 99bd7935-0a86-4700-8869-582d321fefbd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/99bd7935-0a86-4700-8869-582d321fefbd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 99bd7935-0a86-4700-8869-582d321fefbd
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
daniel40/421a3a89-b693-46a7-9536-4371f1420f98
|
daniel40
| 2025-02-04T05:26:59Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | 2025-02-04T05:02:06Z |
---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 421a3a89-b693-46a7-9536-4371f1420f98
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/421a3a89-b693-46a7-9536-4371f1420f98
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 421a3a89-b693-46a7-9536-4371f1420f98
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/alainchabt
|
LHRuig
| 2025-02-04T05:25:39Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:25:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alainchabt
---
# alainchabt
<Gallery />
## Model description
alainchabt LoRA
## Trigger words
You should use `alainchabt` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alainchabt/tree/main) them in the Files & versions tab.
|
LHRuig/alaindeln
|
LHRuig
| 2025-02-04T05:24:47Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:24:15Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alaindeln
---
# alaindeln
<Gallery />
## Model description
alaindeln LoRA
## Trigger words
You should use `alaindeln` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alaindeln/tree/main) them in the Files & versions tab.
|
Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_S-GGUF
|
Triangle104
| 2025-02-04T05:24:01Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"base_model:quantized:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T05:22:32Z |
---
base_model: nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B`](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q4_K_S-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q4_k_s.gguf -c 2048
```
|
LHRuig/akboss
|
LHRuig
| 2025-02-04T05:23:28Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:23:23Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: akboss
---
# akboss
<Gallery />
## Model description
akboss LoRA
## Trigger words
You should use `akboss` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/akboss/tree/main) them in the Files & versions tab.
|
rsh345/llama3-8b-finance-elyza-linear-a_w06-b_w04
|
rsh345
| 2025-02-04T05:22:01Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:merge:elyza/Llama-3-ELYZA-JP-8B",
"base_model:instruction-pretrain/finance-Llama3-8B",
"base_model:merge:instruction-pretrain/finance-Llama3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T05:17:21Z |
---
base_model:
- elyza/Llama-3-ELYZA-JP-8B
- instruction-pretrain/finance-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# llama3-8b-finance-elyza-linear-a_w06-b_w04
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B)
* [instruction-pretrain/finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: elyza/Llama-3-ELYZA-JP-8B
parameters:
weight: 0.6
- model: instruction-pretrain/finance-Llama3-8B
parameters:
weight: 0.4
merge_method: linear
dtype: float16
```
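In effect, each merged parameter tensor is the weighted average 0.6 × ELYZA + 0.4 × finance. A minimal sketch of reproducing the merge with the mergekit CLI, assuming mergekit is installed and the YAML above is saved as `merge.yaml` (a hypothetical filename):

```bash
# Minimal sketch — merge.yaml is a local copy of the config above.
mergekit-yaml merge.yaml ./llama3-8b-finance-elyza-linear --cuda
```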
|
LHRuig/ajcute
|
LHRuig
| 2025-02-04T05:20:20Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:20:16Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ajcute
---
# ajcute
<Gallery />
## Model description
ajcute LoRA
## Trigger words
You should use `ajcute` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ajcute/tree/main) them in the Files & versions tab.
|
mradermacher/Zurich-1.5B-GCv2-50k-GGUF
|
mradermacher
| 2025-02-04T05:20:13Z | 278 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"gammacorpus",
"zurich",
"chat",
"conversational",
"en",
"dataset:rubenroy/GammaCorpus-v2-50k",
"base_model:rubenroy/Zurich-1.5B-GCv2-50k",
"base_model:quantized:rubenroy/Zurich-1.5B-GCv2-50k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-03T19:22:05Z |
---
base_model: rubenroy/Zurich-1.5B-GCv2-50k
datasets:
- rubenroy/GammaCorpus-v2-50k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- gammacorpus
- zurich
- chat
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rubenroy/Zurich-1.5B-GCv2-50k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
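As a quick start, any single-file quant from the table below can be run directly with a local llama.cpp build; a minimal sketch:

```bash
# Minimal sketch — Q4_K_M shown; substitute whichever quant you downloaded.
llama-cli -m Zurich-1.5B-GCv2-50k.Q4_K_M.gguf -p "Hello"
```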
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-50k-GGUF/resolve/main/Zurich-1.5B-GCv2-50k.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common questions, or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Dang-gu/pokemon2
|
Dang-gu
| 2025-02-04T05:19:53Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T05:17:03Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dang-gu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
beast33/7e9a33e4-dfed-46c0-8f45-6919b81fa56d
|
beast33
| 2025-02-04T05:19:39Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T04:54:49Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e9a33e4-dfed-46c0-8f45-6919b81fa56d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6e4f6e948bc6471_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6e4f6e948bc6471_train_data.json
type:
field_input: topic
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: beast33/7e9a33e4-dfed-46c0-8f45-6919b81fa56d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e6e4f6e948bc6471_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 69002658-908b-4f14-a9fb-64d08340747d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 69002658-908b-4f14-a9fb-64d08340747d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7e9a33e4-dfed-46c0-8f45-6919b81fa56d
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 140
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4011 | 1.0 | 140 | 0.6725 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jeongyuni/starbucks
|
jeongyuni
| 2025-02-04T05:19:28Z | 22 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T05:16:36Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jeongyuni
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LHRuig/ajmitchll
|
LHRuig
| 2025-02-04T05:18:22Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:18:03Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ajmitchll
---
# ajmitchll
<Gallery />
## Model description
ajmitchll LoRA
## Trigger words
You should use `ajmitchll` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ajmitchll/tree/main) them in the Files & versions tab.
|
great0001/b9103c98-2954-4a61-95ef-18c8b9cf9652
|
great0001
| 2025-02-04T05:17:09Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-02-04T05:10:47Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b9103c98-2954-4a61-95ef-18c8b9cf9652
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b9b4289b748f826_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b9b4289b748f826_train_data.json
type:
field_instruction: item_title
field_output: comment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/b9103c98-2954-4a61-95ef-18c8b9cf9652
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b9b4289b748f826_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fb91bb99-180c-4ff4-aa46-6d9918134443
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fb91bb99-180c-4ff4-aa46-6d9918134443
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b9103c98-2954-4a61-95ef-18c8b9cf9652
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 3.8817 |
| 3.0603 | 0.0170 | 50 | 3.1547 |
| 2.9416 | 0.0340 | 100 | 3.0674 |
| 2.9691 | 0.0509 | 150 | 3.0348 |
| 3.0199 | 0.0679 | 200 | 3.0113 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LHRuig/aidensx
|
LHRuig
| 2025-02-04T05:14:32Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:14:28Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aidensx
---
# aidensx
<Gallery />
## Model description
aidensx LoRA
## Trigger words
You should use `aidensx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aidensx/tree/main) them in the Files & versions tab.
|
DevQuasar/oumi-ai.distill-r1-670b-math-GGUF
|
DevQuasar
| 2025-02-04T05:14:24Z | 604 | 0 | null |
[
"gguf",
"text-generation",
"base_model:oumi-ai/distill-r1-670b-math",
"base_model:quantized:oumi-ai/distill-r1-670b-math",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-02-04T03:41:59Z |
---
base_model:
- oumi-ai/distill-r1-670b-math
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [oumi-ai/distill-r1-670b-math](https://huggingface.co/oumi-ai/distill-r1-670b-math)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
USNIM/interview_dataset
|
USNIM
| 2025-02-04T05:12:57Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T05:10:51Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** USNIM
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LHRuig/ahiezer
|
LHRuig
| 2025-02-04T05:12:53Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:12:48Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ahiezer
---
# ahiezer
<Gallery />
## Model description
ahiezer LoRA
## Trigger words
You should use `ahiezer` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ahiezer/tree/main) them in the Files & versions tab.
|
ZoniaChatbot/female
|
ZoniaChatbot
| 2025-02-04T05:12:32Z | 7 | 0 | null |
[
"safetensors",
"vits",
"license:cc-by-nd-4.0",
"region:us"
] | null | 2025-02-04T05:00:02Z |
---
license: cc-by-nd-4.0
---
|
LHRuig/agentsmith
|
LHRuig
| 2025-02-04T05:11:45Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T05:11:40Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: agentsmith
---
# agentsmith
<Gallery />
## Model description
agentsmith LoRA
## Trigger words
You should use `agentsmith` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/agentsmith/tree/main) them in the Files & versions tab.
|
Mursaleen121/SciSeek3
|
Mursaleen121
| 2025-02-04T05:11:01Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-02-04T05:08:49Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
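Since the card leaves this section empty, here is a generic, hedged sketch of loading the PEFT adapter in this repo on top of the base model named in the metadata; the adapter layout and prompt are assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit"  # from the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Mursaleen121/SciSeek3")  # attach the adapter weights

inputs = tokenizer("Briefly explain what a LoRA adapter is.", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```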
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
datlaaaaaaa/1b6818e0-989e-432e-8013-054a2fec4ab5
|
datlaaaaaaa
| 2025-02-04T05:10:32Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:29:40Z |
---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1b6818e0-989e-432e-8013-054a2fec4ab5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8f23d0c27dcb0f9f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8f23d0c27dcb0f9f_train_data.json
type:
field_input: evidence
field_instruction: user_input
field_output: claim
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/1b6818e0-989e-432e-8013-054a2fec4ab5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8f23d0c27dcb0f9f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: afeef3dd-1e46-4c12-b26d-35001f70da6e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: afeef3dd-1e46-4c12-b26d-35001f70da6e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
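The config above trains on a custom-formatted JSON dataset. As a hedged illustration (not part of the original card), this is roughly how one record would be rendered into a prompt under the `'{instruction} {input}'` format; the field names come from the config, the example values are invented, and the real axolotl pipeline additionally handles tokenization and special tokens.
```python
# One invented record using the field names from the config above.
record = {
    "user_input": "Is the claim supported by the evidence?",    # field_instruction
    "evidence": "The 2023 report lists revenue growth of 4%.",  # field_input
    "claim": "Revenue grew by 4% in 2023.",                     # field_output (training target)
}

# format: '{instruction} {input}'
prompt = "{instruction} {input}".format(
    instruction=record["user_input"],
    input=record["evidence"],
)
print(prompt)
# Is the claim supported by the evidence? The 2023 report lists revenue growth of 4%.
```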
# 1b6818e0-989e-432e-8013-054a2fec4ab5
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9106 | 0.0035 | 200 | 0.9620 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|