modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-09 00:41:25) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 549 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-09 00:41:08) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
mrferr3t/b2c66b31-ca98-4d88-967a-21feb45c51ec | mrferr3t | 2025-01-30T04:58:24Z | 8 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/mistral-7b", "base_model:adapter:unsloth/mistral-7b", "license:apache-2.0", "region:us"] | null | 2025-01-30T04:49:28Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2c66b31-ca98-4d88-967a-21feb45c51ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 349d1a68b79d245f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/349d1a68b79d245f_train_data.json
type:
field_instruction: question
field_output: best_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 30
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/b2c66b31-ca98-4d88-967a-21feb45c51ec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/349d1a68b79d245f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9119e1b2-3b65-4cce-8060-7a9f2e96c7cf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9119e1b2-3b65-4cce-8060-7a9f2e96c7cf
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b2c66b31-ca98-4d88-967a-21feb45c51ec
This model is a fine-tuned version of [unsloth/mistral-7b](https://huggingface.co/unsloth/mistral-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8167
## Model description
More information needed
## Intended uses & limitations
More information needed
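Since this repository holds a LoRA adapter rather than full model weights, a minimal loading sketch is shown below. It is an illustration assuming the standard 🤗 Transformers + PEFT loading pattern, not something taken from the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained on, then attach the LoRA weights
base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b")
model = PeftModel.from_pretrained(base, "mrferr3t/b2c66b31-ca98-4d88-967a-21feb45c51ec")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b")

# The axolotl config used a bare '{instruction}' prompt format, so a plain question works
inputs = tokenizer("What is the largest planet in the solar system?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```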
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 76
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8396 | 0.0133 | 1 | 1.1466 |
| 2.8951 | 0.3987 | 30 | 0.9471 |
| 3.0997 | 0.7973 | 60 | 0.8167 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rl-llm-coders/RS_GT_RM_1B_iter1 | rl-llm-coders | 2025-01-30T04:58:20Z | 45 | 0 | transformers | ["transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-classification | 2025-01-30T04:47:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
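As a placeholder while this card is incomplete, the hedged sketch below is based only on the repository tags (`transformers`, `llama`, `text-classification`). The meaning of the output logits (for example, whether they represent a reward score) is not documented here and is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "rl-llm-coders/RS_GT_RM_1B_iter1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Score a piece of text; treat this purely as a loading example, since the
# label/score semantics are not described in this card.
inputs = tokenizer("def add(a, b):\n    return a + b", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```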
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arcwarden46/836b4081-36e5-4089-97c4-9a4e82385312 | arcwarden46 | 2025-01-30T04:54:44Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-01-30T04:35:43Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 836b4081-36e5-4089-97c4-9a4e82385312
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 5dd32cdee5c892d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5dd32cdee5c892d5_train_data.json
type:
field_instruction: english_prompt
field_output: sql_statement
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/836b4081-36e5-4089-97c4-9a4e82385312
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/5dd32cdee5c892d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 132b9665-5e41-4e60-9e8b-87e501bd6138
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 132b9665-5e41-4e60-9e8b-87e501bd6138
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 836b4081-36e5-4089-97c4-9a4e82385312
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0413
## Model description
More information needed
## Intended uses & limitations
More information needed
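The axolotl config above trains a LoRA adapter (`lora_r: 64`, `lora_alpha: 128`) on unsloth/Qwen2-0.5B-Instruct for an English-prompt-to-SQL task (`field_instruction: english_prompt`, `field_output: sql_statement`). One common deployment option, sketched below as an assumption rather than an instruction from this card, is to merge the adapter into the base weights with PEFT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the adapter to its base model, then fold the LoRA deltas into the weights
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "arcwarden46/836b4081-36e5-4089-97c4-9a4e82385312")
merged = model.merge_and_unload()

# Save a standalone checkpoint that no longer needs the peft library at inference time
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B-Instruct")
merged.save_pretrained("qwen2-0.5b-sql-merged")
tokenizer.save_pretrained("qwen2-0.5b-sql-merged")
```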
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4922 | 0.0003 | 1 | 2.5977 |
| 0.3941 | 0.0169 | 50 | 0.2369 |
| 0.1448 | 0.0337 | 100 | 0.0865 |
| 0.1021 | 0.0506 | 150 | 0.0484 |
| 0.1136 | 0.0675 | 200 | 0.0413 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nbninh/be919448-aaa4-4b50-99ce-9cf180d0ec82 | nbninh | 2025-01-30T04:53:54Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-135M", "base_model:adapter:unsloth/SmolLM2-135M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T04:40:12Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be919448-aaa4-4b50-99ce-9cf180d0ec82
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c6da635b3fbbd7dd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c6da635b3fbbd7dd_train_data.json
type:
field_input: ''
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/be919448-aaa4-4b50-99ce-9cf180d0ec82
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c6da635b3fbbd7dd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e31f0b8d-60e7-432d-ab6b-0e559cb390ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e31f0b8d-60e7-432d-ab6b-0e559cb390ed
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# be919448-aaa4-4b50-99ce-9cf180d0ec82
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2662
## Model description
More information needed
## Intended uses & limitations
More information needed
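The repo tags (`8-bit`, `bitsandbytes`) and the config (`load_in_8bit: true`) indicate the base model was quantized during training. Below is a hedged sketch of loading the base in 8-bit before attaching the adapter, assuming the usual Transformers + bitsandbytes path; it is not taken from the original card and requires a CUDA GPU.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Quantize the base model to 8-bit on load (needs the bitsandbytes package)
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-135M",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nbninh/be919448-aaa4-4b50-99ce-9cf180d0ec82")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-135M")
```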
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.751 | 0.0973 | 200 | 1.2662 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nblinh/fd3eae18-f101-47e7-bab7-fac444bd0b31 | nblinh | 2025-01-30T04:53:46Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-135M", "base_model:adapter:unsloth/SmolLM2-135M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T04:40:13Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd3eae18-f101-47e7-bab7-fac444bd0b31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c6da635b3fbbd7dd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c6da635b3fbbd7dd_train_data.json
type:
field_input: ''
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/fd3eae18-f101-47e7-bab7-fac444bd0b31
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c6da635b3fbbd7dd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e31f0b8d-60e7-432d-ab6b-0e559cb390ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e31f0b8d-60e7-432d-ab6b-0e559cb390ed
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fd3eae18-f101-47e7-bab7-fac444bd0b31
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.741 | 0.0973 | 200 | 1.2649 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AlSamCur123/Mistral-Nemo-InstructContinuedFine | AlSamCur123 | 2025-01-30T04:49:23Z | 367 | 0 | transformers | ["transformers", "safetensors", "gguf", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-11-07T06:01:42Z |
---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** AlSamCur123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
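A minimal text-generation sketch is shown below. It is an assumption based on the repo tags (`text-generation`, `transformers`) rather than instructions from the author; the tags also list `gguf`, so llama.cpp-style runtimes may be an alternative way to run the model.

```python
from transformers import pipeline

# Load the model behind a standard text-generation pipeline
generator = pipeline("text-generation", model="AlSamCur123/Mistral-Nemo-InstructContinuedFine")

prompt = "Explain in one paragraph what continued fine-tuning of an instruct model means."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```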
|
alchemist69/54057398-61da-494b-b287-9f551c9bc6ec | alchemist69 | 2025-01-30T04:44:51Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-135M", "base_model:adapter:unsloth/SmolLM2-135M", "license:apache-2.0", "region:us"] | null | 2025-01-30T04:40:03Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54057398-61da-494b-b287-9f551c9bc6ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c6da635b3fbbd7dd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c6da635b3fbbd7dd_train_data.json
type:
field_input: ''
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/54057398-61da-494b-b287-9f551c9bc6ec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c6da635b3fbbd7dd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e31f0b8d-60e7-432d-ab6b-0e559cb390ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e31f0b8d-60e7-432d-ab6b-0e559cb390ed
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 54057398-61da-494b-b287-9f551c9bc6ec
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8475 | 0.0019 | 1 | 1.4036 |
| 1.1409 | 0.0973 | 50 | 1.2063 |
| 1.1923 | 0.1946 | 100 | 1.1593 |
| 1.055 | 0.2920 | 150 | 1.1421 |
| 1.1963 | 0.3893 | 200 | 1.1376 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ajku2199/Llama-2-7b-hf_abstract_prob6_dataset1_n1000_seed42_epochs10_batch8_qlora | ajku2199 | 2025-01-30T04:44:33Z | 8 | 0 | peft | ["peft", "safetensors", "region:us"] | null | 2025-01-10T08:10:37Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Bhaskar009/sdxl_trial | Bhaskar009 | 2025-01-30T04:41:32Z | 19 | 1 | diffusers | ["diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2025-01-29T11:26:12Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Bhaskar009/sdxl_trial
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the AdamLucek/oldbookillustrations-small dataset. You can find some example images below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline and attach the LoRA weights from this repository
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.load_lora_weights("Bhaskar009/sdxl_trial")
pipe.to("cuda")

# Generate an image from a text prompt
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Lauther/emb-gte-large-en-v1.5-3e | Lauther | 2025-01-30T04:41:21Z | 7 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5220", "loss:CosineSimilarityLoss", "custom_code", "dataset:Lauther/embeddings-train-semantic", "arxiv:1908.10084", "base_model:Alibaba-NLP/gte-large-en-v1.5", "base_model:finetune:Alibaba-NLP/gte-large-en-v1.5", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-01-30T04:40:44Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5220
- loss:CosineSimilarityLoss
base_model: Alibaba-NLP/gte-large-en-v1.5
widget:
- source_sentence: Identify the column that stores the uncertainty value.
sentences:
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- 'What is equipment calibration?
Calibration is a metrological verification process used to ensure the accuracy
of measurement equipment. It is performed periodically, based on intervals set
by the company or a regulatory body.
Purpose of calibration:
The calibration process corrects any deviations in how the equipment measures
physical magnitudes (variables). This ensures the equipment provides accurate
and reliable data.
Calibration cycles:
There are two main calibration cycles:
1. As-found: Represents the equipment''s measurement accuracy before any adjustments
are made. This cycle is almost always implemented.
2. As-left: Represents the equipment''s measurement accuracy after adjustments
are made. This cycle is used depending on regulatory requirements.
Calibration uncertainty:
- Uncertainty is included in the results of a calibration.
- Calibration uncertainty refers to the margin of error in the device''s measurements,
which also affects the uncertainty of the measured variable or magnitude.'
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- source_sentence: SELECT * FROM EquipmentType LIMIT 1
sentences:
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- 'What is a flow computer?
A flow computer is a device used in measurement engineering. It collects analog
and digital data from flow meters and other sensors.
Key features of a flow computer:
- It has a unique name, firmware version, and manufacturer information.
- It is designed to record and process data such as temperature, pressure, and
fluid volume (for gases or oils).
Main function:
The flow computer sends the collected data to a measurement system. This allows
measurement engineers to analyze the data and perform their tasks effectively.'
- source_sentence: What tables store measurement system data?
sentences:
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'What do measurement equipment measure?
Each equipment measures a physical magnitude, also known as a variable. Based
on the type of variable they measure, devices are classified into different categories.
Equipment classification:
- Primary meter: Assigned by default to equipments like orifice plates.
- Secondary meter: Assigned by default to equipments like transmitters.
- Tertiary meter: Used for other types of equipments.
Equipment types in the database:
The database includes a table listing all equipment types. Examples of equipment
types are:
- Differential pressure transmitters
- RTDs (Resistance Temperature Detectors)
- Orifice plates
- Multivariable transmitters
- Ultrasonic meters
Meteorological checks for equipments:
Each equipment type is assigned a meteorological check, which can be either:
- Calibration: To ensure measurement accuracy.
- Inspection: To verify proper functioning.
Data storage in tables:
The database also includes a separate table for equipment classifications, which
are:
- Primary meter
- Secondary meter
- Tertiary meter
So, an equipment has equipment types and this types has classifications.'
- source_sentence: What is the table structure for equipment types?
sentences:
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- source_sentence: What columns store the uncertainty values?
sentences:
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'How are flow computers and measurement systems related?
Flow computers can have multiple systems assigned to them. However, a measurement
system can only be assigned to one flow computer.
Database terminology:
In the database, this relationship is referred to as:
- Meter streams
- Meter runs
- Sections
Storage of the relationship:
The relationship between a flow computer and its assigned measurement system is
stored in a special table.
User context:
When a user refers to a "meter stream," they are indicating that they are searching
for a measurement system assigned to a specific flow computer.'
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
datasets:
- Lauther/embeddings-train-semantic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) on the [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) <!-- at revision 104333d6af6f97649377c2afbde10a7704870c7b -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/emb-gte-large-en-v1.5-3e")
# Run inference
sentences = [
'What columns store the uncertainty values?',
'How are flow computers and measurement systems related?\nFlow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.\n\nDatabase terminology:\nIn the database, this relationship is referred to as:\n- Meter streams\n- Meter runs\n- Sections\n\nStorage of the relationship:\nThe relationship between a flow computer and its assigned measurement system is stored in a special table.\n\nUser context:\nWhen a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.',
'What is uncertainty?\nUncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.\n\nTypes of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of magnitudes (variables):\n - Refers to the uncertainty of specific variables, such as temperature or pressure.\n - It is calculated after calibrating a device or obtained from the equipment manufacturer\'s manual.\n - This uncertainty serves as a starting point for further calculations related to the equipment.\n\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated for the overall flow measurement.\n - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of the measurement system. Think of them as the "building blocks."\n- Do not confuse the two types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific to individual variables (e.g., temperature, pressure).\n - **Uncertainty of the measurement system**: Specific to the overall flow measurement.\n\nDatabase storage for uncertainties:\nIn the database, uncertainty calculations are stored in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores the uncertainty values for specific variables (e.g., temperature, pressure).\n\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n- To find the uncertainty of the measurement system, join the measurement systems table with the uncertainty of the measurement system table.\n- To find the uncertainty of a specific variable (magnitude), join the measurement systems table with the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not confuse the two types of uncertainty:\n- If the user requests the uncertainty of the measurement system, use the first join (measurement systems table + uncertainty of the measurement system table).\n- If the user requests the uncertainty of a specific variable (magnitude) in a report, use the second join (measurement systems table + uncertainty of magnitudes table).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 5,220 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.47 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 108 tokens</li><li>mean: 222.4 tokens</li><li>max: 452 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.23</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>What is the data type of differential pressure in the measurement system?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the structure of the &&&equipment_data&&& table?</code> | <code>How are flow computers and measurement systems related?<br>Flow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.<br><br>Database terminology:<br>In the database, this relationship is referred to as:<br>- Meter streams<br>- Meter runs<br>- Sections<br><br>Storage of the relationship:<br>The relationship between a flow computer and its assigned measurement system is stored in a special table.<br><br>User context:<br>When a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.</code> | <code>0.35000000000000003</code> |
| <code>Find the columns in the flow computer table that identify the flow computer.</code> | <code>What kind of data store an equipment?<br>Equipments can capture meteorological data, such as pressure, temperature, and volume (magnitudes). This data is essential for users to perform various calculations.<br><br>Data storage:<br>- The measured values are stored in a special table in the database for magnitudes. This table contains the values of the variables captured by the equipments.<br>- These values are **direct measurements** from the fluid (e.g., raw pressure, temperature, or volume readings). **They are not calculated values**, such as uncertainty.<br>- The values stored in the variable values table are **different** from variable uncertainty values, which are calculated separately and represent the margin of error.<br><br>Accessing the data:<br>- Users typically access the data by referring to the readings from the measurement system, not directly from the individual equipments.<br>- The readings are stored in a "variable values" table within the database.<br><br>Linking variable names:<br>If the user needs to kno...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 652 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 652 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.03 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 108 tokens</li><li>mean: 219.25 tokens</li><li>max: 452 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.9</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>How can I filter uncertainty reports by equipment tag?</code> | <code>How does a flow computer generate and store reports?<br>A flow computer generates daily or hourly reports to provide users with operational data. These reports are stored in the flow computer's memory in an organized format.<br><br>Report structure:<br>- Each report includes:<br>- Date and time of the data recording.<br>- Data recorded from flow computers.<br><br>Data storage in tables:<br>The reports are saved in two tables:<br>1. Main table (Index):<br> - Stores the date, time, and flow computer identifier.<br>2. Detail table:<br> - Stores the measured values associated with the report.<br><br>Connection to the Modbus table:<br>The flow computer's reports are linked to a Modbus table. This table contains the names corresponding to each value in the reports, making it easier to interpret the data.</code> | <code>0.09999999999999999</code> |
| <code>What is the purpose of the flow_data table?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the column name for the report date in the Reports table?</code> | <code>What is equipment calibration?<br>Calibration is a metrological verification process used to ensure the accuracy of measurement equipment. It is performed periodically, based on intervals set by the company or a regulatory body.<br><br>Purpose of calibration:<br>The calibration process corrects any deviations in how the equipment measures physical magnitudes (variables). This ensures the equipment provides accurate and reliable data.<br><br>Calibration cycles:<br>There are two main calibration cycles:<br>1. As-found: Represents the equipment's measurement accuracy before any adjustments are made. This cycle is almost always implemented.<br>2. As-left: Represents the equipment's measurement accuracy after adjustments are made. This cycle is used depending on regulatory requirements.<br><br>Calibration uncertainty:<br>- Uncertainty is included in the results of a calibration.<br>- Calibration uncertainty refers to the margin of error in the device's measurements, which also affects the uncertainty of the measured variable or ...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
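To make the loss configuration above concrete, the sketch below shows one way to wire this dataset and `CosineSimilarityLoss` together with the Sentence Transformers v3 trainer. The starting checkpoint, split name, and trainer defaults are assumptions rather than the exact recipe used to produce this model.

```python
# Minimal sketch (not the original training script): fine-tuning with CosineSimilarityLoss
# on (sentence1, sentence2, score) pairs. Split name and starting checkpoint are assumptions.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

# Continue from the released checkpoint; add trust_remote_code=True if the underlying
# GTE architecture requires custom modeling code in your environment.
model = SentenceTransformer("Lauther/emb-gte-large-en-v1.5-3e")

# Columns must be sentence1, sentence2 and a float "score" used as the label.
train_dataset = load_dataset("Lauther/embeddings-train-semantic", split="train")

# CosineSimilarityLoss regresses cosine(embedding1, embedding2) onto the score via MSE,
# matching the loss configuration shown above.
loss = CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```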
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0307 | 10 | 0.501 | - |
| 0.0613 | 20 | 0.3017 | - |
| 0.0920 | 30 | 0.1991 | - |
| 0.1226 | 40 | 0.1107 | - |
| 0.1533 | 50 | 0.1111 | - |
| 0.1839 | 60 | 0.1187 | - |
| 0.2146 | 70 | 0.105 | - |
| 0.2452 | 80 | 0.1292 | - |
| 0.2759 | 90 | 0.0905 | - |
| 0.3065 | 100 | 0.0806 | - |
| 0.3372 | 110 | 0.093 | - |
| 0.3678 | 120 | 0.0787 | - |
| 0.3985 | 130 | 0.0833 | - |
| 0.4291 | 140 | 0.0633 | - |
| 0.4598 | 150 | 0.0968 | 0.0191 |
| 0.4904 | 160 | 0.0795 | - |
| 0.5211 | 170 | 0.0883 | - |
| 0.5517 | 180 | 0.0859 | - |
| 0.5824 | 190 | 0.0673 | - |
| 0.6130 | 200 | 0.0519 | - |
| 0.6437 | 210 | 0.0757 | - |
| 0.6743 | 220 | 0.0786 | - |
| 0.7050 | 230 | 0.0752 | - |
| 0.7356 | 240 | 0.1087 | - |
| 0.7663 | 250 | 0.0812 | - |
| 0.7969 | 260 | 0.0519 | - |
| 0.8276 | 270 | 0.0596 | - |
| 0.8582 | 280 | 0.0521 | - |
| 0.8889 | 290 | 0.07 | - |
| 0.9195 | 300 | 0.0577 | 0.0167 |
| 0.9502 | 310 | 0.0668 | - |
| 0.9808 | 320 | 0.0473 | - |
| 1.0092 | 330 | 0.0477 | - |
| 1.0398 | 340 | 0.0592 | - |
| 1.0705 | 350 | 0.0518 | - |
| 1.1011 | 360 | 0.0689 | - |
| 1.1318 | 370 | 0.0557 | - |
| 1.1625 | 380 | 0.0593 | - |
| 1.1931 | 390 | 0.0672 | - |
| 1.2238 | 400 | 0.0467 | - |
| 1.2544 | 410 | 0.0348 | - |
| 1.2851 | 420 | 0.0582 | - |
| 1.3157 | 430 | 0.0568 | - |
| 1.3464 | 440 | 0.0548 | - |
| 1.3770 | 450 | 0.0599 | 0.0147 |
| 1.4077 | 460 | 0.0495 | - |
| 1.4383 | 470 | 0.0511 | - |
| 1.4690 | 480 | 0.0525 | - |
| 1.4996 | 490 | 0.0533 | - |
| 1.5303 | 500 | 0.0499 | - |
| 1.5609 | 510 | 0.0497 | - |
| 1.5916 | 520 | 0.043 | - |
| 1.6222 | 530 | 0.0471 | - |
| 1.6529 | 540 | 0.0501 | - |
| 1.6835 | 550 | 0.038 | - |
| 1.7142 | 560 | 0.0378 | - |
| 1.7448 | 570 | 0.0438 | - |
| 1.7755 | 580 | 0.0441 | - |
| 1.8061 | 590 | 0.0503 | - |
| 1.8368 | 600 | 0.0534 | 0.0127 |
| 1.8674 | 610 | 0.0403 | - |
| 1.8981 | 620 | 0.0452 | - |
| 1.9287 | 630 | 0.0478 | - |
| 1.9594 | 640 | 0.0334 | - |
| 1.9900 | 650 | 0.0564 | - |
| 2.0184 | 660 | 0.03 | - |
| 2.0490 | 670 | 0.0459 | - |
| 2.0797 | 680 | 0.0284 | - |
| 2.1103 | 690 | 0.029 | - |
| 2.1410 | 700 | 0.0341 | - |
| 2.1716 | 710 | 0.025 | - |
| 2.2023 | 720 | 0.0167 | - |
| 2.2330 | 730 | 0.0387 | - |
| 2.2636 | 740 | 0.036 | - |
| 2.2943 | 750 | 0.044 | 0.0123 |
| 2.3249 | 760 | 0.0288 | - |
| 2.3556 | 770 | 0.033 | - |
| 2.3862 | 780 | 0.0323 | - |
| 2.4169 | 790 | 0.0301 | - |
| 2.4475 | 800 | 0.0399 | - |
| 2.4782 | 810 | 0.0313 | - |
| 2.5088 | 820 | 0.0418 | - |
| 2.5395 | 830 | 0.03 | - |
| 2.5701 | 840 | 0.0374 | - |
| 2.6008 | 850 | 0.0299 | - |
| 2.6314 | 860 | 0.0396 | - |
| 2.6621 | 870 | 0.0302 | - |
| 2.6927 | 880 | 0.0301 | - |
| 2.7234 | 890 | 0.0283 | - |
| 2.7540 | 900 | 0.016 | 0.0114 |
| 2.7847 | 910 | 0.0308 | - |
| 2.8153 | 920 | 0.0408 | - |
| 2.8460 | 930 | 0.0187 | - |
| 2.8766 | 940 | 0.0269 | - |
| 2.9073 | 950 | 0.04 | - |
| 2.9379 | 960 | 0.0207 | - |
| 2.9686 | 970 | 0.0336 | - |
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
rl-llm-coders/RS_GT_RM_1B_iter0 | rl-llm-coders | 2025-01-30T04:40:09Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-classification | 2025-01-30T04:23:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
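Pending an official example, the following is a hypothetical starting point inferred only from the repository tags (`llama`, `text-classification`) and the reward-model naming; the prompt layout and the reading of the logits as a reward score are assumptions.

```python
# Hypothetical usage sketch inferred from the repository tags; not documented by the authors.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "rl-llm-coders/RS_GT_RM_1B_iter0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Assumed prompt/response layout; the model's expected input format is not documented.
text = "Prompt: Write a function that reverses a string.\nResponse: def rev(s): return s[::-1]"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # assumed to be a scalar reward/preference score when num_labels == 1
```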
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso18/d6aac820-fda9-4758-aa5e-519095c706b2 | lesso18 | 2025-01-30T04:39:01Z | 7 | 0 | peft | ["peft", "safetensors", "gpt_neo", "axolotl", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "license:mit", "region:us"] | null | 2025-01-30T04:07:23Z |
---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d6aac820-fda9-4758-aa5e-519095c706b2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9220f3ec5e9baf46_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9220f3ec5e9baf46_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso18/d6aac820-fda9-4758-aa5e-519095c706b2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/9220f3ec5e9baf46_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b014ffbb-9de7-4765-9d8a-0bf229f9b0e3
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: b014ffbb-9de7-4765-9d8a-0bf229f9b0e3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d6aac820-fda9-4758-aa5e-519095c706b2
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4715
## Model description
More information needed
## Intended uses & limitations
More information needed
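In the absence of documented usage, one plausible (untested) way to try the adapter is to load the LoRA weights on top of the base model named in the config above, as sketched below.

```python
# Hedged sketch: attach the LoRA adapter to its base model and generate.
# The training data used a plain '{instruction}' format with a dialogue as the instruction,
# so a raw dialogue prompt is assumed here; this is not documented usage.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/gpt-neo-1.3B"
adapter_id = "lesso18/d6aac820-fda9-4758-aa5e-519095c706b2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "A: Are we still meeting at noon?\nB: Yes, see you at the cafe."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```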
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7043 | 0.0169 | 200 | 1.4715 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/e15b6e90-39b5-4170-a5d6-dd192a8cb5ed | nhung01 | 2025-01-30T04:37:54Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T04:21:56Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e15b6e90-39b5-4170-a5d6-dd192a8cb5ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d5c0c6531c05927b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d5c0c6531c05927b_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/e15b6e90-39b5-4170-a5d6-dd192a8cb5ed
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d5c0c6531c05927b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b4dd25b-0edc-430d-b581-bf00a0e10324
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b4dd25b-0edc-430d-b581-bf00a0e10324
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e15b6e90-39b5-4170-a5d6-dd192a8cb5ed
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7172
## Model description
More information needed
## Intended uses & limitations
More information needed
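No usage guidance is given, but since the config above loads the base model in 8-bit (`load_in_8bit: true`), one plausible (untested) way to run the adapter is sketched below; the chat-style prompt and generation settings are assumptions.

```python
# Hedged sketch: 8-bit base model plus the LoRA adapter (needs a CUDA GPU and bitsandbytes).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Qwen2-1.5B-Instruct"
adapter_id = "nhung01/e15b6e90-39b5-4170-a5d6-dd192a8cb5ed"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Give one tip for writing clear commit messages."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```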
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4148 | 0.0681 | 200 | 0.7172 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
prxy5604/718a3229-73ba-4980-9425-ffb241853776 | prxy5604 | 2025-01-30T04:36:21Z | 8 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Mistral-7b-128k", "base_model:adapter:NousResearch/Yarn-Mistral-7b-128k", "license:apache-2.0", "region:us"] | null | 2025-01-30T03:37:43Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 718a3229-73ba-4980-9425-ffb241853776
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- b1454fa1fd1fe58d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1454fa1fd1fe58d_train_data.json
type:
field_input: possible_answers
field_instruction: question
field_output: memory_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/718a3229-73ba-4980-9425-ffb241853776
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b1454fa1fd1fe58d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d3d1b80-2351-40f7-99cf-7e411e41051a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d3d1b80-2351-40f7-99cf-7e411e41051a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 718a3229-73ba-4980-9425-ffb241853776
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes), default betas=(0.9,0.999) and epsilon=1e-08, with optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.5599 | 0.0005 | 1 | 1.9562 |
| 1.5145 | 0.0275 | 50 | 0.6537 |
| 1.7453 | 0.0549 | 100 | 0.5282 |
| 1.261 | 0.0824 | 150 | 0.4744 |
| 1.8751 | 0.1099 | 200 | 0.4604 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
robiual-awal/b6e0548d-d114-41a4-8030-2ec5989a46c1 | robiual-awal | 2025-01-30T04:35:17Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "region:us"] | null | 2025-01-30T03:42:52Z |
---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b6e0548d-d114-41a4-8030-2ec5989a46c1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- de2f5a3df66e2619_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de2f5a3df66e2619_train_data.json
type:
field_input: package_name
field_instruction: products
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/b6e0548d-d114-41a4-8030-2ec5989a46c1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/de2f5a3df66e2619_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ff6924c-590d-48b4-b2d4-0517ebbf6eba
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ff6924c-590d-48b4-b2d4-0517ebbf6eba
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b6e0548d-d114-41a4-8030-2ec5989a46c1
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 5.4028 |
| 3.7717 | 0.0004 | 13 | 3.6447 |
| 3.5306 | 0.0008 | 26 | 3.5163 |
| 3.2121 | 0.0011 | 39 | 3.4616 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Best000/9fe467ab-bc07-4fbc-8be9-2476da2488aa | Best000 | 2025-01-30T04:35:10Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "region:us"] | null | 2025-01-30T03:42:39Z |
---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9fe467ab-bc07-4fbc-8be9-2476da2488aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- de2f5a3df66e2619_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de2f5a3df66e2619_train_data.json
type:
field_input: package_name
field_instruction: products
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/9fe467ab-bc07-4fbc-8be9-2476da2488aa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/de2f5a3df66e2619_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ff6924c-590d-48b4-b2d4-0517ebbf6eba
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ff6924c-590d-48b4-b2d4-0517ebbf6eba
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9fe467ab-bc07-4fbc-8be9-2476da2488aa
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 5.4028 |
| 3.7715 | 0.0004 | 13 | 3.6480 |
| 3.5329 | 0.0008 | 26 | 3.5187 |
| 3.2091 | 0.0011 | 39 | 3.4623 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrferr3t/5330137c-379e-45ee-ae47-6a4938e99b9d | mrferr3t | 2025-01-30T04:35:01Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2-1.5B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-01-30T04:29:47Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5330137c-379e-45ee-ae47-6a4938e99b9d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d5c0c6531c05927b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d5c0c6531c05927b_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 30
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/5330137c-379e-45ee-ae47-6a4938e99b9d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/d5c0c6531c05927b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b4dd25b-0edc-430d-b581-bf00a0e10324
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b4dd25b-0edc-430d-b581-bf00a0e10324
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5330137c-379e-45ee-ae47-6a4938e99b9d
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.024 | 0.0003 | 1 | 1.7054 |
| 0.6015 | 0.0102 | 30 | 0.7080 |
| 0.5947 | 0.0204 | 60 | 0.6830 |
| 0.5486 | 0.0307 | 90 | 0.6742 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
sleepdeprived3/Cydonia-22B-v1_EXL2_5bpw_H8 | sleepdeprived3 | 2025-01-30T04:34:54Z | 11 | 0 | null | ["safetensors", "mistral", "license:other", "5-bit", "exl2", "region:us"] | null | 2025-01-30T03:32:23Z |
---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## 1000+ members strong 💪
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.*
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1 💿
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1-GGUF
- iMatrix: https://huggingface.co/MarsupialAI/Cydonia-22B-v1_iMat_GGUF
- EXL2: https://huggingface.co/MarsupialAI/Cydonia-22B-v1_EXL2

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- I might release a v1.1... Probably.
- Already have plans for a v2!

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

`>inb4 my model cards have turned into Tumblr`
|
yuniktmr/paraphrased_fine_tuned_bert_uncased-permission-predictor_prod | yuniktmr | 2025-01-30T04:34:30Z | 13 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-01-30T04:31:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
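Since this section is left as a placeholder, the snippet below is only an inferred starting point based on the repository tags (`bert`, `text-classification`); the example sentence and whatever labels the classifier returns are assumptions.

```python
# Inferred usage sketch; the label set of this permission predictor is not documented.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yuniktmr/paraphrased_fine_tuned_bert_uncased-permission-predictor_prod",
)
print(classifier("This app needs access to your contacts to sync your address book."))
```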
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhunglaaaaaaa/885edad0-d782-4939-b294-5cb531d9095e | nhunglaaaaaaa | 2025-01-30T04:33:14Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T04:21:13Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 885edad0-d782-4939-b294-5cb531d9095e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d5c0c6531c05927b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d5c0c6531c05927b_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/885edad0-d782-4939-b294-5cb531d9095e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d5c0c6531c05927b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b4dd25b-0edc-430d-b581-bf00a0e10324
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b4dd25b-0edc-430d-b581-bf00a0e10324
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 885edad0-d782-4939-b294-5cb531d9095e
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4142 | 0.0681 | 200 | 0.7203 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
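The auto-generated card does not include a usage snippet. Below is a minimal inference sketch for loading this LoRA adapter on top of its base model with PEFT; the bf16 weights, `device_map="auto"`, and the prompt are assumptions for illustration, not part of the original card:

```python
# Minimal sketch (assumptions noted above): load the LoRA adapter onto the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-1.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nhunglaaaaaaa/885edad0-d782-4939-b294-5cb531d9095e")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B-Instruct")

prompt = "Explain LoRA fine-tuning in one sentence."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```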
| LuckyLukke/DPO_1-2500 | LuckyLukke | 2025-01-30T04:32:51Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-01-30T04:28:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
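Since no snippet is provided, the following is a minimal, assumed loading sketch for a `transformers` causal language model tagged `text-generation`/`conversational`; it presumes the tokenizer ships a chat template, and the prompt is illustrative:

```python
# Minimal sketch (assumed usage, not an official example for this repository).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LuckyLukke/DPO_1-2500"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello, what can you do?"}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```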
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| lesso01/a7fb0e48-6b5a-4f67-af6d-15bed40b3884 | lesso01 | 2025-01-30T04:32:10Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T04:21:14Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a7fb0e48-6b5a-4f67-af6d-15bed40b3884
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
datasets:
- data_files:
- d5c0c6531c05927b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d5c0c6531c05927b_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso01/a7fb0e48-6b5a-4f67-af6d-15bed40b3884
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d5c0c6531c05927b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b4dd25b-0edc-430d-b581-bf00a0e10324
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 0b4dd25b-0edc-430d-b581-bf00a0e10324
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a7fb0e48-6b5a-4f67-af6d-15bed40b3884
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0681 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
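As a hedged sketch (an assumed workflow, not part of the original card; the output directory is a placeholder), the adapter can be folded back into its base model with PEFT's `merge_and_unload`:

```python
# Minimal sketch: merge the LoRA adapter into the base weights and save the result.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "lesso01/a7fb0e48-6b5a-4f67-af6d-15bed40b3884")

merged = model.merge_and_unload()       # folds the LoRA deltas into the base weights
merged.save_pretrained("merged-model")  # placeholder output directory
AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B-Instruct").save_pretrained("merged-model")
```

Note that the validation loss reported above is `nan`, so any merged weights should be sanity-checked before use.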
| cilooor/39865643-c496-48ac-9f51-b9fc0f62f447 | cilooor | 2025-01-30T04:31:41Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.1-Storm-8B", "base_model:adapter:unsloth/Llama-3.1-Storm-8B", "license:llama3.1", "region:us"] | null | 2025-01-30T03:51:13Z |
---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 39865643-c496-48ac-9f51-b9fc0f62f447
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 91c8fbf3b2faa749_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/91c8fbf3b2faa749_train_data.json
type:
field_input: ingredients
field_instruction: title
field_output: steps
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cilooor/39865643-c496-48ac-9f51-b9fc0f62f447
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 10
max_grad_norm: 0.5
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/91c8fbf3b2faa749_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 250
saves_per_epoch: null
seed: 42
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 16
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 01617280-a23f-4c01-a9d7-f64d9905e269
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 01617280-a23f-4c01-a9d7-f64d9905e269
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 39865643-c496-48ac-9f51-b9fc0f62f447
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8105 | 0.0006 | 1 | nan |
| 0.0 | 0.0312 | 50 | nan |
| 0.0 | 0.0624 | 100 | nan |
| 0.0 | 0.0935 | 150 | nan |
| 0.0 | 0.1247 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
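For reference, a minimal PEFT sketch (an assumed equivalence, not the axolotl internals) of the adapter settings from the config above (`lora_r: 64`, `lora_alpha: 128`, `lora_dropout: 0.05`, `lora_target_linear: true`):

```python
# Minimal sketch: a LoraConfig mirroring the adapter hyperparameters in the YAML above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.1-Storm-8B")
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",  # approximates axolotl's `lora_target_linear: true`
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```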
| Lauther/emb-cde-small-v2-3e | Lauther | 2025-01-30T04:29:11Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5220", "loss:CosineSimilarityLoss", "custom_code", "dataset:Lauther/embeddings-train-semantic", "arxiv:1908.10084", "base_model:jxm/cde-small-v2", "base_model:finetune:jxm/cde-small-v2", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-01-30T04:27:26Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5220
- loss:CosineSimilarityLoss
base_model: jxm/cde-small-v2
widget:
- source_sentence: Identify the column that stores the uncertainty value.
sentences:
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- 'What is equipment calibration?
Calibration is a metrological verification process used to ensure the accuracy
of measurement equipment. It is performed periodically, based on intervals set
by the company or a regulatory body.
Purpose of calibration:
The calibration process corrects any deviations in how the equipment measures
physical magnitudes (variables). This ensures the equipment provides accurate
and reliable data.
Calibration cycles:
There are two main calibration cycles:
1. As-found: Represents the equipment''s measurement accuracy before any adjustments
are made. This cycle is almost always implemented.
2. As-left: Represents the equipment''s measurement accuracy after adjustments
are made. This cycle is used depending on regulatory requirements.
Calibration uncertainty:
- Uncertainty is included in the results of a calibration.
- Calibration uncertainty refers to the margin of error in the device''s measurements,
which also affects the uncertainty of the measured variable or magnitude.'
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- source_sentence: SELECT * FROM EquipmentType LIMIT 1
sentences:
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- 'What is a flow computer?
A flow computer is a device used in measurement engineering. It collects analog
and digital data from flow meters and other sensors.
Key features of a flow computer:
- It has a unique name, firmware version, and manufacturer information.
- It is designed to record and process data such as temperature, pressure, and
fluid volume (for gases or oils).
Main function:
The flow computer sends the collected data to a measurement system. This allows
measurement engineers to analyze the data and perform their tasks effectively.'
- source_sentence: What tables store measurement system data?
sentences:
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'What do measurement equipment measure?
Each equipment measures a physical magnitude, also known as a variable. Based
on the type of variable they measure, devices are classified into different categories.
Equipment classification:
- Primary meter: Assigned by default to equipments like orifice plates.
- Secondary meter: Assigned by default to equipments like transmitters.
- Tertiary meter: Used for other types of equipments.
Equipment types in the database:
The database includes a table listing all equipment types. Examples of equipment
types are:
- Differential pressure transmitters
- RTDs (Resistance Temperature Detectors)
- Orifice plates
- Multivariable transmitters
- Ultrasonic meters
Meteorological checks for equipments:
Each equipment type is assigned a meteorological check, which can be either:
- Calibration: To ensure measurement accuracy.
- Inspection: To verify proper functioning.
Data storage in tables:
The database also includes a separate table for equipment classifications, which
are:
- Primary meter
- Secondary meter
- Tertiary meter
So, an equipment has equipment types and this types has classifications.'
- source_sentence: What is the table structure for equipment types?
sentences:
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- source_sentence: What columns store the uncertainty values?
sentences:
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'How are flow computers and measurement systems related?
Flow computers can have multiple systems assigned to them. However, a measurement
system can only be assigned to one flow computer.
Database terminology:
In the database, this relationship is referred to as:
- Meter streams
- Meter runs
- Sections
Storage of the relationship:
The relationship between a flow computer and its assigned measurement system is
stored in a special table.
User context:
When a user refers to a "meter stream," they are indicating that they are searching
for a measurement system assigned to a specific flow computer.'
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
datasets:
- Lauther/embeddings-train-semantic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on jxm/cde-small-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jxm/cde-small-v2](https://huggingface.co/jxm/cde-small-v2) on the [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space (see the example output in the usage section below) and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [jxm/cde-small-v2](https://huggingface.co/jxm/cde-small-v2) <!-- at revision a7e5882ad52c27ea2831fc8258f24379c25cb459 -->
- **Maximum Sequence Length:** Not reported
- **Output Dimensionality:** 1024 dimensions (see the usage example below)
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({}) with Transformer model: ContextualDocumentEmbeddingTransformer
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/emb-cde-small-v2-3e")
# Run inference
sentences = [
'What columns store the uncertainty values?',
'How are flow computers and measurement systems related?\nFlow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.\n\nDatabase terminology:\nIn the database, this relationship is referred to as:\n- Meter streams\n- Meter runs\n- Sections\n\nStorage of the relationship:\nThe relationship between a flow computer and its assigned measurement system is stored in a special table.\n\nUser context:\nWhen a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.',
'What is uncertainty?\nUncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.\n\nTypes of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of magnitudes (variables):\n - Refers to the uncertainty of specific variables, such as temperature or pressure.\n - It is calculated after calibrating a device or obtained from the equipment manufacturer\'s manual.\n - This uncertainty serves as a starting point for further calculations related to the equipment.\n\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated for the overall flow measurement.\n - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of the measurement system. Think of them as the "building blocks."\n- Do not confuse the two types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific to individual variables (e.g., temperature, pressure).\n - **Uncertainty of the measurement system**: Specific to the overall flow measurement.\n\nDatabase storage for uncertainties:\nIn the database, uncertainty calculations are stored in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores the uncertainty values for specific variables (e.g., temperature, pressure).\n\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n- To find the uncertainty of the measurement system, join the measurement systems table with the uncertainty of the measurement system table.\n- To find the uncertainty of a specific variable (magnitude), join the measurement systems table with the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not confuse the two types of uncertainty:\n- If the user requests the uncertainty of the measurement system, use the first join (measurement systems table + uncertainty of the measurement system table).\n- If the user requests the uncertainty of a specific variable (magnitude) in a report, use the second join (measurement systems table + uncertainty of magnitudes table).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 5,220 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.88 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 114 tokens</li><li>mean: 244.02 tokens</li><li>max: 489 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.23</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>What is the data type of differential pressure in the measurement system?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the structure of the &&&equipment_data&&& table?</code> | <code>How are flow computers and measurement systems related?<br>Flow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.<br><br>Database terminology:<br>In the database, this relationship is referred to as:<br>- Meter streams<br>- Meter runs<br>- Sections<br><br>Storage of the relationship:<br>The relationship between a flow computer and its assigned measurement system is stored in a special table.<br><br>User context:<br>When a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.</code> | <code>0.35000000000000003</code> |
| <code>Find the columns in the flow computer table that identify the flow computer.</code> | <code>What kind of data store an equipment?<br>Equipments can capture meteorological data, such as pressure, temperature, and volume (magnitudes). This data is essential for users to perform various calculations.<br><br>Data storage:<br>- The measured values are stored in a special table in the database for magnitudes. This table contains the values of the variables captured by the equipments.<br>- These values are **direct measurements** from the fluid (e.g., raw pressure, temperature, or volume readings). **They are not calculated values**, such as uncertainty.<br>- The values stored in the variable values table are **different** from variable uncertainty values, which are calculated separately and represent the margin of error.<br><br>Accessing the data:<br>- Users typically access the data by referring to the readings from the measurement system, not directly from the individual equipments.<br>- The readings are stored in a "variable values" table within the database.<br><br>Linking variable names:<br>If the user needs to kno...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 652 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 652 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.48 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 114 tokens</li><li>mean: 241.25 tokens</li><li>max: 489 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.9</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>How can I filter uncertainty reports by equipment tag?</code> | <code>How does a flow computer generate and store reports?<br>A flow computer generates daily or hourly reports to provide users with operational data. These reports are stored in the flow computer's memory in an organized format.<br><br>Report structure:<br>- Each report includes:<br>- Date and time of the data recording.<br>- Data recorded from flow computers.<br><br>Data storage in tables:<br>The reports are saved in two tables:<br>1. Main table (Index):<br> - Stores the date, time, and flow computer identifier.<br>2. Detail table:<br> - Stores the measured values associated with the report.<br><br>Connection to the Modbus table:<br>The flow computer's reports are linked to a Modbus table. This table contains the names corresponding to each value in the reports, making it easier to interpret the data.</code> | <code>0.09999999999999999</code> |
| <code>What is the purpose of the flow_data table?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the column name for the report date in the Reports table?</code> | <code>What is equipment calibration?<br>Calibration is a metrological verification process used to ensure the accuracy of measurement equipment. It is performed periodically, based on intervals set by the company or a regulatory body.<br><br>Purpose of calibration:<br>The calibration process corrects any deviations in how the equipment measures physical magnitudes (variables). This ensures the equipment provides accurate and reliable data.<br><br>Calibration cycles:<br>There are two main calibration cycles:<br>1. As-found: Represents the equipment's measurement accuracy before any adjustments are made. This cycle is almost always implemented.<br>2. As-left: Represents the equipment's measurement accuracy after adjustments are made. This cycle is used depending on regulatory requirements.<br><br>Calibration uncertainty:<br>- Uncertainty is included in the results of a calibration.<br>- Calibration uncertainty refers to the margin of error in the device's measurements, which also affects the uncertainty of the measured variable or ...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
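For reference, here is a minimal sketch of how the `CosineSimilarityLoss` above is wired to the model and dataset; the split name, `trust_remote_code=True`, and skipping any model-specific context-embedding steps that `cde-small-v2` may require are assumptions:

```python
# Minimal sketch (assumptions noted above), not the exact training script.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("jxm/cde-small-v2", trust_remote_code=True)
dataset = load_dataset("Lauther/embeddings-train-semantic")  # columns: sentence1, sentence2, score

# CosineSimilarityLoss regresses cosine(sentence1, sentence2) onto the float
# `score` column; its default loss_fct is torch.nn.MSELoss, as listed above.
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=dataset["train"],  # split name assumed
    loss=loss,
)
trainer.train()
```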
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
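Expressed as code, the non-default values above map roughly onto `SentenceTransformerTrainingArguments` as follows (a sketch; `output_dir` is a placeholder and all other arguments keep the defaults shown in the full list below):

```python
# Minimal sketch of the non-default training hyperparameters listed above.
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="emb-cde-small-v2-3e",  # placeholder output directory
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```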
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0307 | 10 | 0.3228 | - |
| 0.0613 | 20 | 0.1919 | - |
| 0.0920 | 30 | 0.2414 | - |
| 0.1226 | 40 | 0.1649 | - |
| 0.1533 | 50 | 0.1554 | - |
| 0.1839 | 60 | 0.1432 | - |
| 0.2146 | 70 | 0.138 | - |
| 0.2452 | 80 | 0.1656 | - |
| 0.2759 | 90 | 0.1306 | - |
| 0.3065 | 100 | 0.1396 | - |
| 0.3372 | 110 | 0.0934 | - |
| 0.3678 | 120 | 0.1263 | - |
| 0.3985 | 130 | 0.1209 | - |
| 0.4291 | 140 | 0.0839 | - |
| 0.4598 | 150 | 0.1128 | 0.0260 |
| 0.4904 | 160 | 0.0895 | - |
| 0.5211 | 170 | 0.1226 | - |
| 0.5517 | 180 | 0.086 | - |
| 0.5824 | 190 | 0.0733 | - |
| 0.6130 | 200 | 0.0827 | - |
| 0.6437 | 210 | 0.0861 | - |
| 0.6743 | 220 | 0.0774 | - |
| 0.7050 | 230 | 0.0784 | - |
| 0.7356 | 240 | 0.095 | - |
| 0.7663 | 250 | 0.0793 | - |
| 0.7969 | 260 | 0.0583 | - |
| 0.8276 | 270 | 0.0571 | - |
| 0.8582 | 280 | 0.0597 | - |
| 0.8889 | 290 | 0.0742 | - |
| 0.9195 | 300 | 0.0569 | 0.0177 |
| 0.9502 | 310 | 0.0702 | - |
| 0.9808 | 320 | 0.0642 | - |
| 1.0092 | 330 | 0.0526 | - |
| 1.0398 | 340 | 0.0604 | - |
| 1.0705 | 350 | 0.0491 | - |
| 1.1011 | 360 | 0.0598 | - |
| 1.1318 | 370 | 0.0616 | - |
| 1.1625 | 380 | 0.0557 | - |
| 1.1931 | 390 | 0.0552 | - |
| 1.2238 | 400 | 0.0474 | - |
| 1.2544 | 410 | 0.0316 | - |
| 1.2851 | 420 | 0.0555 | - |
| 1.3157 | 430 | 0.0554 | - |
| 1.3464 | 440 | 0.051 | - |
| 1.3770 | 450 | 0.0578 | 0.0162 |
| 1.4077 | 460 | 0.0461 | - |
| 1.4383 | 470 | 0.0624 | - |
| 1.4690 | 480 | 0.0505 | - |
| 1.4996 | 490 | 0.0506 | - |
| 1.5303 | 500 | 0.0608 | - |
| 1.5609 | 510 | 0.0465 | - |
| 1.5916 | 520 | 0.0326 | - |
| 1.6222 | 530 | 0.0501 | - |
| 1.6529 | 540 | 0.0419 | - |
| 1.6835 | 550 | 0.0403 | - |
| 1.7142 | 560 | 0.0315 | - |
| 1.7448 | 570 | 0.0336 | - |
| 1.7755 | 580 | 0.0427 | - |
| 1.8061 | 590 | 0.053 | - |
| 1.8368 | 600 | 0.0451 | 0.0144 |
| 1.8674 | 610 | 0.0419 | - |
| 1.8981 | 620 | 0.0382 | - |
| 1.9287 | 630 | 0.0428 | - |
| 1.9594 | 640 | 0.0335 | - |
| 1.9900 | 650 | 0.0606 | - |
| 2.0184 | 660 | 0.0317 | - |
| 2.0490 | 670 | 0.0338 | - |
| 2.0797 | 680 | 0.0256 | - |
| 2.1103 | 690 | 0.0262 | - |
| 2.1410 | 700 | 0.028 | - |
| 2.1716 | 710 | 0.0229 | - |
| 2.2023 | 720 | 0.0157 | - |
| 2.2330 | 730 | 0.0367 | - |
| 2.2636 | 740 | 0.0226 | - |
| 2.2943 | 750 | 0.034 | 0.0128 |
| 2.3249 | 760 | 0.0247 | - |
| 2.3556 | 770 | 0.0251 | - |
| 2.3862 | 780 | 0.0245 | - |
| 2.4169 | 790 | 0.0249 | - |
| 2.4475 | 800 | 0.0247 | - |
| 2.4782 | 810 | 0.0266 | - |
| 2.5088 | 820 | 0.0338 | - |
| 2.5395 | 830 | 0.026 | - |
| 2.5701 | 840 | 0.0304 | - |
| 2.6008 | 850 | 0.0248 | - |
| 2.6314 | 860 | 0.0347 | - |
| 2.6621 | 870 | 0.0241 | - |
| 2.6927 | 880 | 0.0204 | - |
| 2.7234 | 890 | 0.0204 | - |
| 2.7540 | 900 | 0.0147 | 0.0126 |
| 2.7847 | 910 | 0.0266 | - |
| 2.8153 | 920 | 0.0279 | - |
| 2.8460 | 930 | 0.0159 | - |
| 2.8766 | 940 | 0.0162 | - |
| 2.9073 | 950 | 0.0315 | - |
| 2.9379 | 960 | 0.0174 | - |
| 2.9686 | 970 | 0.0244 | - |
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
tensorwa/mgq01
|
tensorwa
| 2025-01-30T04:26:33Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-28T08:27:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mergekit-community/L3.1-Artemis-h-8B
|
mergekit-community
| 2025-01-30T04:22:48Z | 33 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:merge:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:kromvault/L3.1-Ablaze-Vulca-v0.1-8B",
"base_model:merge:kromvault/L3.1-Ablaze-Vulca-v0.1-8B",
"base_model:mergekit-community/L3-Boshima-a",
"base_model:merge:mergekit-community/L3-Boshima-a",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-30T04:17:24Z |
---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- mergekit-community/L3-Boshima-a
- mlabonne/NeuralDaredevil-8B-abliterated
- kromeurus/L3.1-Ablaze-Vulca-v0.1-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [kromeurus/L3.1-Ablaze-Vulca-v0.1-8B](https://huggingface.co/kromeurus/L3.1-Ablaze-Vulca-v0.1-8B) as the base.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
* [mergekit-community/L3-Boshima-a](https://huggingface.co/mergekit-community/L3-Boshima-a)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: kromeurus/L3.1-Ablaze-Vulca-v0.1-8B
models:
- model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
parameters:
weight: 1
- model: mergekit-community/L3-Boshima-a
parameters:
weight: 1
- model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
weight: 0.8
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
parameters:
weight: 0.8
- model: kromeurus/L3.1-Ablaze-Vulca-v0.1-8B
parameters:
weight: 0.6
parameters:
normalize: true
```
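As a rough, unofficial usage sketch (not part of the original card), the merged model can be loaded for chat-style generation with `transformers`. `device_map="auto"` assumes `accelerate` is installed, the tokenizer is assumed to ship a chat template (typical for Llama-3.1-based merges), and the prompt and generation settings are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mergekit-community/L3.1-Artemis-h-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```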
|
alchemist69/6690e5a4-fdf8-499b-838d-b159414d8d63
|
alchemist69
| 2025-01-30T04:20:18Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-2b",
"base_model:adapter:unsloth/codegemma-2b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T04:06:14Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6690e5a4-fdf8-499b-838d-b159414d8d63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-2b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 715878661d7cd8f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/715878661d7cd8f6_train_data.json
type:
field_instruction: question
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/6690e5a4-fdf8-499b-838d-b159414d8d63
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/715878661d7cd8f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 27bcc751-fc2b-4235-9629-3df0070473d7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 27bcc751-fc2b-4235-9629-3df0070473d7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6690e5a4-fdf8-499b-838d-b159414d8d63
This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with default betas=(0.9, 0.999) and epsilon=1e-08, overridden by optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.934 | 0.0029 | 1 | 2.4863 |
| 3.0923 | 0.1427 | 50 | 1.7766 |
| 1.3505 | 0.2853 | 100 | 1.0516 |
| 0.5032 | 0.4280 | 150 | 0.7334 |
| 0.5741 | 0.5706 | 200 | 0.6573 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
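As a rough starting point (not an official snippet from the card author), the LoRA adapter in this repository can be loaded on top of the base model with `peft`. The prompt is illustrative and should follow the plain `'{instruction}'` format used in the config above; `device_map="auto"` assumes `accelerate` is installed:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/codegemma-2b"
adapter_id = "alchemist69/6690e5a4-fdf8-499b-838d-b159414d8d63"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```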
|
roleplaiapp/Minerva-14b-V0.1-i1-IQ4_XS-GGUF
|
roleplaiapp
| 2025-01-30T04:19:32Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"14b",
"IQ4_XS",
"iq4",
"llama-cpp",
"minerva",
"text-generation",
"v01",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-01-30T04:18:57Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- IQ4_XS
- gguf
- iq4
- llama-cpp
- minerva
- text-generation
- v01
---
# roleplaiapp/Minerva-14b-V0.1-i1-IQ4_XS-GGUF
**Repo:** `roleplaiapp/Minerva-14b-V0.1-i1-IQ4_XS-GGUF`
**Original Model:** `Minerva-14b-V0.1-i1`
**Quantized File:** `Minerva-14b-V0.1.i1-IQ4_XS.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ4_XS`
## Overview
This is a GGUF IQ4_XS quantized version of Minerva-14b-V0.1-i1.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
|
daniel40/ab42ef83-2c97-436b-859b-3ccd08a68b18
|
daniel40
| 2025-01-30T04:19:23Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-1.3B",
"base_model:adapter:EleutherAI/gpt-neo-1.3B",
"license:mit",
"region:us"
] | null | 2025-01-30T04:08:16Z |
---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ab42ef83-2c97-436b-859b-3ccd08a68b18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9220f3ec5e9baf46_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9220f3ec5e9baf46_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/ab42ef83-2c97-436b-859b-3ccd08a68b18
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/9220f3ec5e9baf46_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b014ffbb-9de7-4765-9d8a-0bf229f9b0e3
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b014ffbb-9de7-4765-9d8a-0bf229f9b0e3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ab42ef83-2c97-436b-859b-3ccd08a68b18
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.6485 |
| 6.3357 | 0.0011 | 13 | 1.6094 |
| 6.6091 | 0.0022 | 26 | 1.5745 |
| 6.0299 | 0.0033 | 39 | 1.5613 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ajku2199/Llama-2-7b-hf_abstract_prob6_dataset2_n1000_seed7_epochs10_batch8_qlora
|
ajku2199
| 2025-01-30T04:16:46Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2025-01-10T06:32:38Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
mrferr3t/75661d4b-a41b-4faa-ba01-a492bad28d27
|
mrferr3t
| 2025-01-30T04:15:56Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | 2025-01-30T02:38:45Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75661d4b-a41b-4faa-ba01-a492bad28d27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 00748ae27c0f3538_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/00748ae27c0f3538_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 30
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/75661d4b-a41b-4faa-ba01-a492bad28d27
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/00748ae27c0f3538_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d55b15aa-62e7-4486-8bc4-33f1c5e10ec7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d55b15aa-62e7-4486-8bc4-33f1c5e10ec7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75661d4b-a41b-4faa-ba01-a492bad28d27
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.0344 | 0.0006 | 1 | 1.6728 |
| 7.8571 | 0.0171 | 30 | 1.3879 |
| 6.0655 | 0.0341 | 60 | 1.3566 |
| 4.6908 | 0.0512 | 90 | 1.3377 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
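As an unofficial convenience sketch, the adapter can also be folded into the base weights with PEFT's `merge_and_unload` for standalone inference. The output directory name is arbitrary, and `trust_remote_code=True` mirrors the setting in the config above:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Llama-2-7b-128k"
adapter_id = "mrferr3t/75661d4b-a41b-4faa-ba01-a492bad28d27"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("yarn-llama-2-7b-128k-75661d4b-merged")
tokenizer.save_pretrained("yarn-llama-2-7b-128k-75661d4b-merged")
```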
|
NikIman1/companio_test
|
NikIman1
| 2025-01-30T04:14:40Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"granite",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-01-30T04:11:07Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF
|
mradermacher
| 2025-01-30T04:11:09Z | 534 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nlpguy/Lion-Lamarck-v.1.0.8",
"base_model:quantized:nlpguy/Lion-Lamarck-v.1.0.8",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-29T19:39:16Z |
---
base_model: nlpguy/Lion-Lamarck-v.1.0.8
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nlpguy/Lion-Lamarck-v.1.0.8
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
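As one possible way to run these files (a hedged sketch, not an official recipe), a single-part quant from this repo can be downloaded and loaded with the `llama-cpp-python` bindings; the chosen file, context size, and prompt are illustrative:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes the llama-cpp-python package is installed

# Download one of the quants listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF",
    filename="Lion-Lamarck-v.1.0.8.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Q: What is an imatrix (weighted) quant?\nA:", max_tokens=128)
print(result["choices"][0]["text"])
```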
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lion-Lamarck-v.1.0.8-i1-GGUF/resolve/main/Lion-Lamarck-v.1.0.8.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Re-ultiima-14B-GGUF
|
mradermacher
| 2025-01-30T04:07:49Z | 261 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"ch",
"base_model:TeamDelta/Re-ultiima-14B",
"base_model:quantized:TeamDelta/Re-ultiima-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T23:55:38Z |
---
base_model: TeamDelta/Re-ultiima-14B
language:
- en
- ch
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/TeamDelta/Re-ultiima-14B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Re-ultiima-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Re-ultiima-14B-GGUF/resolve/main/Re-ultiima-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cunghoctienganh/49d059e1-846e-41c0-94a7-a1689d0acac4
|
cunghoctienganh
| 2025-01-30T04:05:38Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:52:59Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 49d059e1-846e-41c0-94a7-a1689d0acac4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17faa1212cf04019_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17faa1212cf04019_train_data.json
type:
field_input: problem
field_instruction: question
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/49d059e1-846e-41c0-94a7-a1689d0acac4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/17faa1212cf04019_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 30bba4af-cf5b-44b3-8b13-edea30eaea8e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 30bba4af-cf5b-44b3-8b13-edea30eaea8e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 49d059e1-846e-41c0-94a7-a1689d0acac4
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5377 | 0.0599 | 200 | 0.4934 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
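As a rough, unofficial sketch that mirrors the `load_in_8bit` setting in the config above, the adapter can be loaded on top of an 8-bit quantized base model (requires `bitsandbytes` and `accelerate`); the prompt is illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v0.6"
adapter_id = "cunghoctienganh/49d059e1-846e-41c0-94a7-a1689d0acac4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit above
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Question: What is 12 * 7? Show the steps.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```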
|
rl-llm-coders/RS_RM_1B_iter2
|
rl-llm-coders
| 2025-01-30T04:04:52Z | 577 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-30T04:01:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrferr3t/6cd30b78-f6c0-4c61-aa6b-02e6624528b8
|
mrferr3t
| 2025-01-30T04:03:29Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-30T03:54:47Z |
---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6cd30b78-f6c0-4c61-aa6b-02e6624528b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 91c8fbf3b2faa749_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/91c8fbf3b2faa749_train_data.json
type:
field_input: ingredients
field_instruction: title
field_output: steps
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 30
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/6cd30b78-f6c0-4c61-aa6b-02e6624528b8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/91c8fbf3b2faa749_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 01617280-a23f-4c01-a9d7-f64d9905e269
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 01617280-a23f-4c01-a9d7-f64d9905e269
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6cd30b78-f6c0-4c61-aa6b-02e6624528b8
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7049 | 0.0003 | 1 | 1.6411 |
| 1.3963 | 0.0094 | 30 | 1.4294 |
| 1.5106 | 0.0187 | 60 | 1.3944 |
| 1.3941 | 0.0281 | 90 | 1.3744 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
minhtrannnn/04b3b496-70cc-4463-a6b7-67be6cf4a0dc
|
minhtrannnn
| 2025-01-30T03:59:42Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:35:32Z |
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 04b3b496-70cc-4463-a6b7-67be6cf4a0dc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c7e16a2b3005e907_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c7e16a2b3005e907_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhtrannnn/04b3b496-70cc-4463-a6b7-67be6cf4a0dc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c7e16a2b3005e907_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b6a4e43-35ca-49e0-9627-90df8e791f7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b6a4e43-35ca-49e0-9627-90df8e791f7d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 04b3b496-70cc-4463-a6b7-67be6cf4a0dc
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.5760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.3507 | 0.0087 | 200 | 1.5760 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nbninh/b8f92d42-3f4a-4426-8d2c-5bb722d3963b
|
nbninh
| 2025-01-30T03:59:32Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:21:25Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8f92d42-3f4a-4426-8d2c-5bb722d3963b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b45b740ac91ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b45b740ac91ad_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/b8f92d42-3f4a-4426-8d2c-5bb722d3963b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b45b740ac91ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c67bdce-2bb5-4db7-acd8-febcebc77549
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c67bdce-2bb5-4db7-acd8-febcebc77549
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b8f92d42-3f4a-4426-8d2c-5bb722d3963b
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6271 | 0.1671 | 200 | 0.1352 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso14/aa047f8f-060f-4f1e-a864-84d2b563ddd5
|
lesso14
| 2025-01-30T03:57:58Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-01-30T03:44:24Z |
---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa047f8f-060f-4f1e-a864-84d2b563ddd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- de2f5a3df66e2619_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de2f5a3df66e2619_train_data.json
type:
field_input: package_name
field_instruction: products
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso14/aa047f8f-060f-4f1e-a864-84d2b563ddd5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/de2f5a3df66e2619_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ff6924c-590d-48b4-b2d4-0517ebbf6eba
wandb_project: multi
wandb_run: your_name
wandb_runid: 5ff6924c-590d-48b4-b2d4-0517ebbf6eba
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aa047f8f-060f-4f1e-a864-84d2b563ddd5
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 3.3175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2786 | 0.0468 | 200 | 3.3175 |
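For reference, the effective batch sizes listed in the hyperparameters above follow directly from the per-device batch size, gradient accumulation, and device count; a quick sanity check:
```python
micro_batch_size = 2        # train_batch_size per device
gradient_accumulation = 4
num_devices = 8

# Gradients are accumulated over 4 micro-batches on each of the 8 devices.
total_train_batch_size = micro_batch_size * gradient_accumulation * num_devices
# Evaluation does not accumulate gradients, so only the device count multiplies in.
total_eval_batch_size = micro_batch_size * num_devices

print(total_train_batch_size)  # 64
print(total_eval_batch_size)   # 16
```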
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rl-llm-coders/RS_RM_1B_iter1
|
rl-llm-coders
| 2025-01-30T03:57:52Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-30T03:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
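Since the quick-start section is unfilled, the following is only a hedged sketch of loading this checkpoint as a transformers sequence-classification model; the meaning of the output logits (for example, whether the model produces a single reward-style score) is an assumption, not something documented in this card.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "rl-llm-coders/RS_RM_1B_iter1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical input text; the intended input format is not documented here.
inputs = tokenizer("Example response to be scored.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpretation depends on how the classifier head was trained
```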
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zzunyang/KLQD_ko_gemma2
|
zzunyang
| 2025-01-30T03:56:54Z | 25 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:recoilme/recoilme-gemma-2-9B-v0.4",
"base_model:adapter:recoilme/recoilme-gemma-2-9B-v0.4",
"region:us"
] | null | 2025-01-30T02:50:44Z |
---
base_model: recoilme/recoilme-gemma-2-9B-v0.4
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
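The quick-start code is left unfilled; below is a minimal sketch of loading this PEFT adapter on top of its listed base model, assuming the standard transformers + peft loading pattern rather than any usage documented by the author.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "recoilme/recoilme-gemma-2-9B-v0.4"   # base model from the card metadata
adapter_id = "zzunyang/KLQD_ko_gemma2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Hypothetical prompt; the intended prompt format is not documented in this card.
inputs = tokenizer("Hello, please introduce yourself.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```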
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
datlaaaaaaa/6b319281-304b-48d5-911f-78c6d5201d27
|
datlaaaaaaa
| 2025-01-30T03:56:09Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:08:29Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6b319281-304b-48d5-911f-78c6d5201d27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b975fca203429_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b975fca203429_train_data.json
type:
field_input: note
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/6b319281-304b-48d5-911f-78c6d5201d27
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b975fca203429_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6b319281-304b-48d5-911f-78c6d5201d27
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.0261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1469 | 0.0107 | 200 | 1.0261 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nbninh/c769df87-e2aa-412c-850f-fd7bc1d5b5b6
|
nbninh
| 2025-01-30T03:54:51Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:07:42Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c769df87-e2aa-412c-850f-fd7bc1d5b5b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b975fca203429_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b975fca203429_train_data.json
type:
field_input: note
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/c769df87-e2aa-412c-850f-fd7bc1d5b5b6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b975fca203429_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c769df87-e2aa-412c-850f-fd7bc1d5b5b6
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.0264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1578 | 0.0107 | 200 | 1.0264 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
sercetexam9/afro-xlmr-base-sun-finetuned-augmentation-LUNAR
|
sercetexam9
| 2025-01-30T03:53:01Z | 38 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-30T03:42:13Z |
---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: afro-xlmr-base-sun-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-sun-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3556
- F1: 0.3987
- Roc Auc: 0.6273
- Accuracy: 0.5156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6264 | 1.0 | 57 | 0.4136 | 0.1420 | 0.5 | 0.5067 |
| 0.382 | 2.0 | 114 | 0.3981 | 0.1420 | 0.5 | 0.5067 |
| 0.4333 | 3.0 | 171 | 0.3526 | 0.2407 | 0.5539 | 0.5244 |
| 0.3472 | 4.0 | 228 | 0.3299 | 0.2767 | 0.5946 | 0.5511 |
| 0.325 | 5.0 | 285 | 0.3186 | 0.2669 | 0.6007 | 0.5156 |
| 0.3188 | 6.0 | 342 | 0.3278 | 0.2681 | 0.5975 | 0.5289 |
| 0.2909 | 7.0 | 399 | 0.3446 | 0.2675 | 0.5809 | 0.5422 |
| 0.2579 | 8.0 | 456 | 0.3238 | 0.2935 | 0.6150 | 0.5289 |
| 0.2779 | 9.0 | 513 | 0.3341 | 0.2891 | 0.6043 | 0.52 |
| 0.2547 | 10.0 | 570 | 0.3615 | 0.3142 | 0.5980 | 0.52 |
| 0.2266 | 11.0 | 627 | 0.3394 | 0.3499 | 0.6212 | 0.5289 |
| 0.2258 | 12.0 | 684 | 0.3587 | 0.3515 | 0.6061 | 0.5022 |
| 0.2159 | 13.0 | 741 | 0.3402 | 0.3677 | 0.6297 | 0.5333 |
| 0.2163 | 14.0 | 798 | 0.3485 | 0.3678 | 0.6198 | 0.4978 |
| 0.2007 | 15.0 | 855 | 0.3556 | 0.3987 | 0.6273 | 0.5156 |
| 0.1955 | 16.0 | 912 | 0.3552 | 0.3724 | 0.6195 | 0.5022 |
| 0.1806 | 17.0 | 969 | 0.3619 | 0.3744 | 0.6195 | 0.5111 |
| 0.189 | 18.0 | 1026 | 0.3559 | 0.3850 | 0.6227 | 0.4889 |
| 0.1837 | 19.0 | 1083 | 0.3561 | 0.3868 | 0.6241 | 0.4933 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
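Usage is not documented in this card; the sketch below assumes a multi-label classification setup (suggested by the combination of F1, ROC AUC, and accuracy above) with sigmoid scores thresholded at 0.5; treat both the setup and the threshold as assumptions.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sercetexam9/afro-xlmr-base-sun-finetuned-augmentation-LUNAR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical input; the expected language/domain is not documented here.
inputs = tokenizer("Example sentence to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Assumed 0.5 decision threshold; label names come from the checkpoint config.
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```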
|
earnxus/53a074bd-abb1-494f-b930-1bda27dbdb63
|
earnxus
| 2025-01-30T03:46:20Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-2b",
"base_model:adapter:unsloth/codegemma-2b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:33:47Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 53a074bd-abb1-494f-b930-1bda27dbdb63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 715878661d7cd8f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/715878661d7cd8f6_train_data.json
type:
field_instruction: question
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/53a074bd-abb1-494f-b930-1bda27dbdb63
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/715878661d7cd8f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 27bcc751-fc2b-4235-9629-3df0070473d7
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 27bcc751-fc2b-4235-9629-3df0070473d7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 53a074bd-abb1-494f-b930-1bda27dbdb63
This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.6553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5465 | 0.1427 | 200 | 1.6553 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso05/3d6c2bef-5196-4e10-a18c-e8a671e5592b
|
lesso05
| 2025-01-30T03:46:09Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:41:38Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d6c2bef-5196-4e10-a18c-e8a671e5592b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
datasets:
- data_files:
- b1454fa1fd1fe58d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1454fa1fd1fe58d_train_data.json
type:
field_input: possible_answers
field_instruction: question
field_output: memory_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/3d6c2bef-5196-4e10-a18c-e8a671e5592b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1454fa1fd1fe58d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d3d1b80-2351-40f7-99cf-7e411e41051a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d3d1b80-2351-40f7-99cf-7e411e41051a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d6c2bef-5196-4e10-a18c-e8a671e5592b
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.5389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.7208 | 0.0001 | 1 | 1.9607 |
| 5.7267 | 0.0007 | 5 | 1.2537 |
| 2.5735 | 0.0014 | 10 | 0.5844 |
| 1.6465 | 0.0021 | 15 | 0.5603 |
| 2.2664 | 0.0027 | 20 | 0.5498 |
| 2.0219 | 0.0034 | 25 | 0.5389 |
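The run above pairs a cosine schedule with 10 warmup steps over only 25 total steps, so most of the run is spent in warmup and early decay. A minimal sketch of that learning-rate trajectory, using the standard transformers scheduler helper and plain AdamW as a stand-in for the 8-bit bitsandbytes optimizer actually used:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

# A dummy parameter so the optimizer has something to step.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-4)  # learning_rate from the config above
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=25
)

lrs = []
for _ in range(25):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])

# Linear warmup to 2e-4 over the first 10 steps, then cosine decay toward 0.
print(lrs[0], lrs[9], lrs[-1])
```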
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fifxus/e82a8c93-2b5c-4be6-a263-00b6ce01c774
|
fifxus
| 2025-01-30T03:46:06Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-2b",
"base_model:adapter:unsloth/codegemma-2b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:33:51Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e82a8c93-2b5c-4be6-a263-00b6ce01c774
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 715878661d7cd8f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/715878661d7cd8f6_train_data.json
type:
field_instruction: question
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/e82a8c93-2b5c-4be6-a263-00b6ce01c774
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/715878661d7cd8f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 27bcc751-fc2b-4235-9629-3df0070473d7
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: 27bcc751-fc2b-4235-9629-3df0070473d7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# e82a8c93-2b5c-4be6-a263-00b6ce01c774
This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on the dataset listed in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.6569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.528 | 0.1427 | 200 | 1.6569 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Lauther/emb-multilingual-e5-large-instruct-3e
|
Lauther
| 2025-01-30T03:44:39Z | 117 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5220",
"loss:CosineSimilarityLoss",
"dataset:Lauther/embeddings-train-semantic",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-01-30T03:43:45Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5220
- loss:CosineSimilarityLoss
base_model: intfloat/multilingual-e5-large-instruct
widget:
- source_sentence: Identify the column that stores the uncertainty value.
sentences:
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- 'What is equipment calibration?
Calibration is a metrological verification process used to ensure the accuracy
of measurement equipment. It is performed periodically, based on intervals set
by the company or a regulatory body.
Purpose of calibration:
The calibration process corrects any deviations in how the equipment measures
physical magnitudes (variables). This ensures the equipment provides accurate
and reliable data.
Calibration cycles:
There are two main calibration cycles:
1. As-found: Represents the equipment''s measurement accuracy before any adjustments
are made. This cycle is almost always implemented.
2. As-left: Represents the equipment''s measurement accuracy after adjustments
are made. This cycle is used depending on regulatory requirements.
Calibration uncertainty:
- Uncertainty is included in the results of a calibration.
- Calibration uncertainty refers to the margin of error in the device''s measurements,
which also affects the uncertainty of the measured variable or magnitude.'
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- source_sentence: SELECT * FROM EquipmentType LIMIT 1
sentences:
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- 'What is a flow computer?
A flow computer is a device used in measurement engineering. It collects analog
and digital data from flow meters and other sensors.
Key features of a flow computer:
- It has a unique name, firmware version, and manufacturer information.
- It is designed to record and process data such as temperature, pressure, and
fluid volume (for gases or oils).
Main function:
The flow computer sends the collected data to a measurement system. This allows
measurement engineers to analyze the data and perform their tasks effectively.'
- source_sentence: What tables store measurement system data?
sentences:
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'What do measurement equipment measure?
Each equipment measures a physical magnitude, also known as a variable. Based
on the type of variable they measure, devices are classified into different categories.
Equipment classification:
- Primary meter: Assigned by default to equipments like orifice plates.
- Secondary meter: Assigned by default to equipments like transmitters.
- Tertiary meter: Used for other types of equipments.
Equipment types in the database:
The database includes a table listing all equipment types. Examples of equipment
types are:
- Differential pressure transmitters
- RTDs (Resistance Temperature Detectors)
- Orifice plates
- Multivariable transmitters
- Ultrasonic meters
Metrological checks for equipment:
Each equipment type is assigned a metrological check, which can be either:
- Calibration: To ensure measurement accuracy.
- Inspection: To verify proper functioning.
Data storage in tables:
The database also includes a separate table for equipment classifications, which
are:
- Primary meter
- Secondary meter
- Tertiary meter
In short, each piece of equipment has an equipment type, and each type has a classification.'
- source_sentence: What is the table structure for equipment types?
sentences:
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"Equipment Tag.\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"Equipment Tag\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- source_sentence: What columns store the uncertainty values?
sentences:
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'How are flow computers and measurement systems related?
Flow computers can have multiple systems assigned to them. However, a measurement
system can only be assigned to one flow computer.
Database terminology:
In the database, this relationship is referred to as:
- Meter streams
- Meter runs
- Sections
Storage of the relationship:
The relationship between a flow computer and its assigned measurement system is
stored in a special table.
User context:
When a user refers to a "meter stream," they are indicating that they are searching
for a measurement system assigned to a specific flow computer.'
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
datasets:
- Lauther/embeddings-train-semantic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-large-instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/emb-multilingual-e5-large-instruct-3e")
# Run inference
sentences = [
'What columns store the uncertainty values?',
'How are flow computers and measurement systems related?\nFlow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.\n\nDatabase terminology:\nIn the database, this relationship is referred to as:\n- Meter streams\n- Meter runs\n- Sections\n\nStorage of the relationship:\nThe relationship between a flow computer and its assigned measurement system is stored in a special table.\n\nUser context:\nWhen a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.',
'What is uncertainty?\nUncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.\n\nTypes of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of magnitudes (variables):\n - Refers to the uncertainty of specific variables, such as temperature or pressure.\n - It is calculated after calibrating a device or obtained from the equipment manufacturer\'s manual.\n - This uncertainty serves as a starting point for further calculations related to the equipment.\n\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated for the overall flow measurement.\n - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of the measurement system. Think of them as the "building blocks."\n- Do not confuse the two types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific to individual variables (e.g., temperature, pressure).\n - **Uncertainty of the measurement system**: Specific to the overall flow measurement.\n\nDatabase storage for uncertainties:\nIn the database, uncertainty calculations are stored in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores the uncertainty values for specific variables (e.g., temperature, pressure).\n\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n- To find the uncertainty of the measurement system, join the measurement systems table with the uncertainty of the measurement system table.\n- To find the uncertainty of a specific variable (magnitude), join the measurement systems table with the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not confuse the two types of uncertainty:\n- If the user requests the uncertainty of the measurement system, use the first join (measurement systems table + uncertainty of the measurement system table).\n- If the user requests the uncertainty of a specific variable (magnitude) in a report, use the second join (measurement systems table + uncertainty of magnitudes table).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
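For retrieval-style use, the same embeddings can be ranked with the `semantic_search` utility that ships with Sentence Transformers. The snippet below is a minimal sketch; the corpus and query strings are illustrative placeholders, not taken from the training data.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Lauther/emb-multilingual-e5-large-instruct-3e")

# Illustrative placeholder corpus of short domain descriptions
corpus = [
    "Uncertainty of magnitudes is stored in a dedicated table, one row per variable.",
    "Flow computer reports are split into an index table and a detail table.",
    "Equipment assigned to a measurement system receives an Equipment Tag.",
]
query = "Where are variable uncertainty values stored?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}]
```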
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 5,220 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.3 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 257.3 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.23</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>What is the data type of differential pressure in the measurement system?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the structure of the &&&equipment_data&&& table?</code> | <code>How are flow computers and measurement systems related?<br>Flow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.<br><br>Database terminology:<br>In the database, this relationship is referred to as:<br>- Meter streams<br>- Meter runs<br>- Sections<br><br>Storage of the relationship:<br>The relationship between a flow computer and its assigned measurement system is stored in a special table.<br><br>User context:<br>When a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.</code> | <code>0.35000000000000003</code> |
| <code>Find the columns in the flow computer table that identify the flow computer.</code> | <code>What kind of data store an equipment?<br>Equipments can capture meteorological data, such as pressure, temperature, and volume (magnitudes). This data is essential for users to perform various calculations.<br><br>Data storage:<br>- The measured values are stored in a special table in the database for magnitudes. This table contains the values of the variables captured by the equipments.<br>- These values are **direct measurements** from the fluid (e.g., raw pressure, temperature, or volume readings). **They are not calculated values**, such as uncertainty.<br>- The values stored in the variable values table are **different** from variable uncertainty values, which are calculated separately and represent the margin of error.<br><br>Accessing the data:<br>- Users typically access the data by referring to the readings from the measurement system, not directly from the individual equipments.<br>- The readings are stored in a "variable values" table within the database.<br><br>Linking variable names:<br>If the user needs to kno...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 652 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 652 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.8 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 253.84 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.9</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>How can I filter uncertainty reports by equipment tag?</code> | <code>How does a flow computer generate and store reports?<br>A flow computer generates daily or hourly reports to provide users with operational data. These reports are stored in the flow computer's memory in an organized format.<br><br>Report structure:<br>- Each report includes:<br>- Date and time of the data recording.<br>- Data recorded from flow computers.<br><br>Data storage in tables:<br>The reports are saved in two tables:<br>1. Main table (Index):<br> - Stores the date, time, and flow computer identifier.<br>2. Detail table:<br> - Stores the measured values associated with the report.<br><br>Connection to the Modbus table:<br>The flow computer's reports are linked to a Modbus table. This table contains the names corresponding to each value in the reports, making it easier to interpret the data.</code> | <code>0.09999999999999999</code> |
| <code>What is the purpose of the flow_data table?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the column name for the report date in the Reports table?</code> | <code>What is equipment calibration?<br>Calibration is a metrological verification process used to ensure the accuracy of measurement equipment. It is performed periodically, based on intervals set by the company or a regulatory body.<br><br>Purpose of calibration:<br>The calibration process corrects any deviations in how the equipment measures physical magnitudes (variables). This ensures the equipment provides accurate and reliable data.<br><br>Calibration cycles:<br>There are two main calibration cycles:<br>1. As-found: Represents the equipment's measurement accuracy before any adjustments are made. This cycle is almost always implemented.<br>2. As-left: Represents the equipment's measurement accuracy after adjustments are made. This cycle is used depending on regulatory requirements.<br><br>Calibration uncertainty:<br>- Uncertainty is included in the results of a calibration.<br>- Calibration uncertainty refers to the margin of error in the device's measurements, which also affects the uncertainty of the measured variable or ...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
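For reference, the dataset and loss described above can be wired together with the Sentence Transformers v3 trainer API. This is a minimal sketch rather than the exact training script: it assumes the dataset exposes a `train` split with the `sentence1`, `sentence2`, and `score` columns listed above, and it omits the hyperparameters documented in the next section.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Base model this card was fine-tuned from
model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# Assumes a `train` split with columns: sentence1, sentence2, score
train_dataset = load_dataset("Lauther/embeddings-train-semantic", split="train")

# CosineSimilarityLoss regresses the cosine similarity of the two sentence
# embeddings against the float `score` column (MSE objective by default).
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```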
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0307 | 10 | 1.5374 | - |
| 0.0613 | 20 | 1.0251 | - |
| 0.0920 | 30 | 0.361 | - |
| 0.1226 | 40 | 0.1819 | - |
| 0.1533 | 50 | 0.186 | - |
| 0.1839 | 60 | 0.1697 | - |
| 0.2146 | 70 | 0.1437 | - |
| 0.2452 | 80 | 0.172 | - |
| 0.2759 | 90 | 0.1199 | - |
| 0.3065 | 100 | 0.1278 | - |
| 0.3372 | 110 | 0.1037 | - |
| 0.3678 | 120 | 0.1156 | - |
| 0.3985 | 130 | 0.0971 | - |
| 0.4291 | 140 | 0.0911 | - |
| 0.4598 | 150 | 0.1158 | 0.0249 |
| 0.4904 | 160 | 0.0906 | - |
| 0.5211 | 170 | 0.106 | - |
| 0.5517 | 180 | 0.0921 | - |
| 0.5824 | 190 | 0.0748 | - |
| 0.6130 | 200 | 0.0741 | - |
| 0.6437 | 210 | 0.0894 | - |
| 0.6743 | 220 | 0.0815 | - |
| 0.7050 | 230 | 0.0771 | - |
| 0.7356 | 240 | 0.1156 | - |
| 0.7663 | 250 | 0.0857 | - |
| 0.7969 | 260 | 0.0566 | - |
| 0.8276 | 270 | 0.0716 | - |
| 0.8582 | 280 | 0.0662 | - |
| 0.8889 | 290 | 0.0963 | - |
| 0.9195 | 300 | 0.0678 | 0.0212 |
| 0.9502 | 310 | 0.077 | - |
| 0.9808 | 320 | 0.0642 | - |
| 1.0092 | 330 | 0.0725 | - |
| 1.0398 | 340 | 0.0701 | - |
| 1.0705 | 350 | 0.0549 | - |
| 1.1011 | 360 | 0.0699 | - |
| 1.1318 | 370 | 0.0714 | - |
| 1.1625 | 380 | 0.0745 | - |
| 1.1931 | 390 | 0.0754 | - |
| 1.2238 | 400 | 0.0486 | - |
| 1.2544 | 410 | 0.047 | - |
| 1.2851 | 420 | 0.076 | - |
| 1.3157 | 430 | 0.0689 | - |
| 1.3464 | 440 | 0.0629 | - |
| 1.3770 | 450 | 0.0657 | 0.0178 |
| 1.4077 | 460 | 0.0622 | - |
| 1.4383 | 470 | 0.0657 | - |
| 1.4690 | 480 | 0.0498 | - |
| 1.4996 | 490 | 0.0653 | - |
| 1.5303 | 500 | 0.0715 | - |
| 1.5609 | 510 | 0.0615 | - |
| 1.5916 | 520 | 0.0441 | - |
| 1.6222 | 530 | 0.0566 | - |
| 1.6529 | 540 | 0.0524 | - |
| 1.6835 | 550 | 0.0423 | - |
| 1.7142 | 560 | 0.0441 | - |
| 1.7448 | 570 | 0.0553 | - |
| 1.7755 | 580 | 0.0572 | - |
| 1.8061 | 590 | 0.0686 | - |
| 1.8368 | 600 | 0.06 | 0.0146 |
| 1.8674 | 610 | 0.0562 | - |
| 1.8981 | 620 | 0.0517 | - |
| 1.9287 | 630 | 0.0498 | - |
| 1.9594 | 640 | 0.0424 | - |
| 1.9900 | 650 | 0.0729 | - |
| 2.0184 | 660 | 0.0347 | - |
| 2.0490 | 670 | 0.06 | - |
| 2.0797 | 680 | 0.0441 | - |
| 2.1103 | 690 | 0.0409 | - |
| 2.1410 | 700 | 0.0416 | - |
| 2.1716 | 710 | 0.0345 | - |
| 2.2023 | 720 | 0.024 | - |
| 2.2330 | 730 | 0.0458 | - |
| 2.2636 | 740 | 0.0465 | - |
| 2.2943 | 750 | 0.0494 | 0.0132 |
| 2.3249 | 760 | 0.0388 | - |
| 2.3556 | 770 | 0.0363 | - |
| 2.3862 | 780 | 0.0441 | - |
| 2.4169 | 790 | 0.0378 | - |
| 2.4475 | 800 | 0.0484 | - |
| 2.4782 | 810 | 0.051 | - |
| 2.5088 | 820 | 0.0464 | - |
| 2.5395 | 830 | 0.036 | - |
| 2.5701 | 840 | 0.0423 | - |
| 2.6008 | 850 | 0.0278 | - |
| 2.6314 | 860 | 0.0474 | - |
| 2.6621 | 870 | 0.0357 | - |
| 2.6927 | 880 | 0.0386 | - |
| 2.7234 | 890 | 0.0334 | - |
| 2.7540 | 900 | 0.0199 | 0.0127 |
| 2.7847 | 910 | 0.0381 | - |
| 2.8153 | 920 | 0.0415 | - |
| 2.8460 | 930 | 0.0274 | - |
| 2.8766 | 940 | 0.0353 | - |
| 2.9073 | 950 | 0.0423 | - |
| 2.9379 | 960 | 0.0267 | - |
| 2.9686 | 970 | 0.042 | - |
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
nghiatrannnnnn/7e2a3bae-1052-48c0-a520-a5f0ddfb314d
|
nghiatrannnnnn
| 2025-01-30T03:44:18Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:21:08Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e2a3bae-1052-48c0-a520-a5f0ddfb314d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b45b740ac91ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b45b740ac91ad_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/7e2a3bae-1052-48c0-a520-a5f0ddfb314d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b45b740ac91ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c67bdce-2bb5-4db7-acd8-febcebc77549
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c67bdce-2bb5-4db7-acd8-febcebc77549
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7e2a3bae-1052-48c0-a520-a5f0ddfb314d
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft), trained on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.1347
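This repository contains a LoRA adapter rather than merged weights, so inference requires attaching the adapter to the base model. The following is a minimal sketch using PEFT; the prompt and generation settings are placeholders, not part of the original training setup.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/zephyr-sft"
adapter_id = "nghiatrannnnnn/7e2a3bae-1052-48c0-a520-a5f0ddfb314d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter weights from this repository on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Summarize what a LoRA adapter is in one sentence."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```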
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7164 | 0.1671 | 200 | 0.1347 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cunghoctienganh/a3f1d3f3-0422-41bc-8014-1a7000a20f88
|
cunghoctienganh
| 2025-01-30T03:43:14Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:34:45Z |
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a3f1d3f3-0422-41bc-8014-1a7000a20f88
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c7e16a2b3005e907_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c7e16a2b3005e907_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/a3f1d3f3-0422-41bc-8014-1a7000a20f88
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c7e16a2b3005e907_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b6a4e43-35ca-49e0-9627-90df8e791f7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b6a4e43-35ca-49e0-9627-90df8e791f7d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a3f1d3f3-0422-41bc-8014-1a7000a20f88
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), trained on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.5756
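Because the adapter was trained with the base model loaded in 8-bit (see `load_in_8bit` in the configuration above), a memory-friendly option is to quantize the base at inference time as well. A minimal sketch, assuming `bitsandbytes` is installed; 8-bit loading at inference is optional, not required.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "berkeley-nest/Starling-LM-7B-alpha"
adapter_id = "cunghoctienganh/a3f1d3f3-0422-41bc-8014-1a7000a20f88"

# Mirror the 8-bit setting used during training to keep memory usage low
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=quant_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
# Generation then works exactly as with a non-quantized model.
```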
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.3536 | 0.0087 | 200 | 1.5756 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhungphammmmm/223e2da2-48fe-42dc-b9f9-8fb012228a77
|
nhungphammmmm
| 2025-01-30T03:36:50Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:35:00Z |
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 223e2da2-48fe-42dc-b9f9-8fb012228a77
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c7e16a2b3005e907_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c7e16a2b3005e907_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/223e2da2-48fe-42dc-b9f9-8fb012228a77
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c7e16a2b3005e907_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b6a4e43-35ca-49e0-9627-90df8e791f7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b6a4e43-35ca-49e0-9627-90df8e791f7d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 223e2da2-48fe-42dc-b9f9-8fb012228a77
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), trained on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.5758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.3481 | 0.0087 | 200 | 1.5758 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nadejdatarabukina/71d3a083-9dbb-44f7-b1ff-8365afb19043
|
nadejdatarabukina
| 2025-01-30T03:36:40Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T03:17:00Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 71d3a083-9dbb-44f7-b1ff-8365afb19043
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5dd32cdee5c892d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5dd32cdee5c892d5_train_data.json
type:
field_instruction: english_prompt
field_output: sql_statement
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/71d3a083-9dbb-44f7-b1ff-8365afb19043
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.02
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 33
micro_batch_size: 2
mlflow_experiment_name: /tmp/5dd32cdee5c892d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 17
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 132b9665-5e41-4e60-9e8b-87e501bd6138
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 132b9665-5e41-4e60-9e8b-87e501bd6138
warmup_steps: 17
weight_decay: 0.005
xformers_attention: true
```
</details><br>
# 71d3a083-9dbb-44f7-b1ff-8365afb19043
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct), trained on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH (PyTorch AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 17
- training_steps: 33
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0013 | 15 | nan |
| 0.0 | 0.0017 | 20 | nan |
| 0.0 | 0.0021 | 25 | nan |
| 0.0 | 0.0025 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
aseratus1/a732b9eb-3dbc-4eaa-8205-7f8501b363f6
|
aseratus1
| 2025-01-30T03:36:22Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:11:43Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a732b9eb-3dbc-4eaa-8205-7f8501b363f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b975fca203429_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b975fca203429_train_data.json
type:
field_input: note
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aseratus1/a732b9eb-3dbc-4eaa-8205-7f8501b363f6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b975fca203429_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a732b9eb-3dbc-4eaa-8205-7f8501b363f6
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M), trained on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.9370
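Recent `transformers` releases can resolve PEFT adapter repositories directly, which keeps the loading code short. A minimal sketch, assuming `peft` is installed alongside `transformers`; the prompt is a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "aseratus1/a732b9eb-3dbc-4eaa-8205-7f8501b363f6"

# With peft installed, transformers reads adapter_config.json from this repo,
# downloads the unsloth/SmolLM-360M base weights, and attaches the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-360M")

prompt = "What is the capital of France?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```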
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.849 | 0.0107 | 200 | 0.9370 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
oldiday/eff7dcc1-80b7-4bad-bc83-8d05fb95ccbd
|
oldiday
| 2025-01-30T03:35:29Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T03:10:05Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eff7dcc1-80b7-4bad-bc83-8d05fb95ccbd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fd7b4a135a8a3353_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fd7b4a135a8a3353_train_data.json
type:
field_input: choices
field_instruction: instruction
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/eff7dcc1-80b7-4bad-bc83-8d05fb95ccbd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/fd7b4a135a8a3353_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 2585e5aa-7408-49e8-8a48-8d96ba8b51db
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: 2585e5aa-7408-49e8-8a48-8d96ba8b51db
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# eff7dcc1-80b7-4bad-bc83-8d05fb95ccbd
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0021 | 1 | 4.1402 |
| 3.7985 | 0.0187 | 9 | 3.5354 |
| 3.1328 | 0.0375 | 18 | 3.0675 |
| 2.9045 | 0.0563 | 27 | 3.0002 |
| 2.9692 | 0.075 | 36 | 2.9799 |
| 2.92 | 0.0938 | 45 | 2.9708 |
| 3.0109 | 0.1125 | 54 | 2.9612 |
| 2.9287 | 0.1313 | 63 | 2.9583 |
| 3.0034 | 0.15 | 72 | 2.9546 |
| 3.046 | 0.1688 | 81 | 2.9525 |
| 3.1371 | 0.1875 | 90 | 2.9511 |
| 2.9368 | 0.2062 | 99 | 2.9515 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
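This repository stores only the LoRA adapter, so inference requires loading the Qwen/Qwen2.5-7B base model and attaching the adapter with PEFT. A minimal sketch, assuming a recent `transformers`/`peft` install; the prompt and generation settings are illustrative, not values from the training run:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "oldiday/eff7dcc1-80b7-4bad-bc83-8d05fb95ccbd"

# Load the base model, then attach the LoRA adapter trained above.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Summarize the following exam question and its answer choices."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```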
| demohong/e5c786ca-02b3-4886-bf43-37fcac7659c8 | demohong | 2025-01-30T03:35:11Z | 7 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M", "base_model:adapter:unsloth/SmolLM-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T03:07:58Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5c786ca-02b3-4886-bf43-37fcac7659c8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b975fca203429_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b975fca203429_train_data.json
type:
field_input: note
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/e5c786ca-02b3-4886-bf43-37fcac7659c8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b975fca203429_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e5c786ca-02b3-4886-bf43-37fcac7659c8
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1611 | 0.0107 | 200 | 1.0272 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| Nexspear/ef78cf79-127f-4342-9abd-936a24e3de25 | Nexspear | 2025-01-30T03:32:21Z | 7 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us"] | null | 2025-01-30T03:13:54Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ef78cf79-127f-4342-9abd-936a24e3de25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a1e5e079b3bd8977_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a1e5e079b3bd8977_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: Nexspear/ef78cf79-127f-4342-9abd-936a24e3de25
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/a1e5e079b3bd8977_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ef78cf79-127f-4342-9abd-936a24e3de25
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 2.7159 |
| 2.5733 | 0.0059 | 9 | 2.6051 |
| 2.4322 | 0.0117 | 18 | 2.4262 |
| 2.2516 | 0.0176 | 27 | 2.3187 |
| 2.3156 | 0.0234 | 36 | 2.2441 |
| 2.265 | 0.0293 | 45 | 2.1921 |
| 2.123 | 0.0351 | 54 | 2.1536 |
| 2.1664 | 0.0410 | 63 | 2.1280 |
| 2.1344 | 0.0468 | 72 | 2.1116 |
| 2.117 | 0.0527 | 81 | 2.1024 |
| 2.0101 | 0.0585 | 90 | 2.0983 |
| 2.0481 | 0.0644 | 99 | 2.0975 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
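The config above renders each record through the template `'{instruction} {input}'`, falling back to `'{instruction}'` when the input field is empty. A small sketch of that formatting logic for anyone reproducing prompts at inference time (this mirrors the config values, not axolotl's internal prompt builder):
```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Mirror the card's format / no_input_format templates."""
    if input_text:
        return f"{instruction} {input_text}"  # format: '{instruction} {input}'
    return instruction                        # no_input_format: '{instruction}'

# Hypothetical record from a1e5e079b3bd8977_train_data.json
print(build_prompt("Rewrite the sentence in formal English.", "gonna need that report asap"))
```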
| ancient41/8af5193a-522c-44d4-aa8f-baf653373378 | ancient41 | 2025-01-30T03:31:52Z | 7 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-01-30T03:16:51Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8af5193a-522c-44d4-aa8f-baf653373378
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 5dd32cdee5c892d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5dd32cdee5c892d5_train_data.json
type:
field_instruction: english_prompt
field_output: sql_statement
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ancient41/8af5193a-522c-44d4-aa8f-baf653373378
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/5dd32cdee5c892d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 132b9665-5e41-4e60-9e8b-87e501bd6138
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 132b9665-5e41-4e60-9e8b-87e501bd6138
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8af5193a-522c-44d4-aa8f-baf653373378
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4922 | 0.0003 | 1 | 2.5977 |
| 0.4016 | 0.0169 | 50 | 0.2359 |
| 0.1551 | 0.0337 | 100 | 0.0872 |
| 0.1064 | 0.0506 | 150 | 0.0486 |
| 0.1183 | 0.0675 | 200 | 0.0420 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
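Unlike the other runs on this page, this config overrides the Adam hyperparameters through `optim_args` (beta2 = 0.95, epsilon = 1e-5). A rough sketch of the equivalent optimizer construction with `bitsandbytes`; the parameter group and learning rate are placeholders, and the 8-bit optimizer still needs a CUDA device to actually step:
```python
import torch
import bitsandbytes as bnb

# Stand-in for the LoRA parameters that adamw_bnb_8bit would optimize.
params = [torch.nn.Parameter(torch.zeros(64, 64))]

# Mirrors optim_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5.
optimizer = bnb.optim.AdamW8bit(params, lr=1e-4, betas=(0.9, 0.95), eps=1e-5, weight_decay=0.0)
```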
| rl-llm-coders/RM_8B_iter0 | rl-llm-coders | 2025-01-30T03:31:41Z | 470 | 0 | transformers | ["transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-classification | 2025-01-30T03:07:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
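No snippet is documented yet. Given the repository's `text-classification` pipeline tag, Llama architecture, and reward-model naming, one plausible starting point is to load it as a sequence-classification scorer; treat the following as a speculative sketch, not a confirmed API for this checkpoint:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "rl-llm-coders/RM_8B_iter0"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Score a candidate answer; a higher logit would indicate a preferred response.
text = "Question: Reverse a string in Python.\nAnswer: def rev(s):\n    return s[::-1]"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    score = model(**inputs).logits[0]
print(score)
```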
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| lesso17/5940c906-038e-4301-b293-a9ba8eaa500c | lesso17 | 2025-01-30T03:30:59Z | 7 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T03:16:43Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5940c906-038e-4301-b293-a9ba8eaa500c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 5dd32cdee5c892d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5dd32cdee5c892d5_train_data.json
type:
field_instruction: english_prompt
field_output: sql_statement
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/5940c906-038e-4301-b293-a9ba8eaa500c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5dd32cdee5c892d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 132b9665-5e41-4e60-9e8b-87e501bd6138
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 132b9665-5e41-4e60-9e8b-87e501bd6138
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5940c906-038e-4301-b293-a9ba8eaa500c
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0169 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| robiual-awal/043838b0-9dbf-4345-9236-21024adcee21 | robiual-awal | 2025-01-30T03:29:34Z | 7 | 0 | peft | ["peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "region:us"] | null | 2025-01-30T03:20:30Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 043838b0-9dbf-4345-9236-21024adcee21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/043838b0-9dbf-4345-9236-21024adcee21
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: Birthday-SN56-30-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 043838b0-9dbf-4345-9236-21024adcee21
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 11.1302 |
| 44.4981 | 0.0097 | 13 | 11.0910 |
| 44.3207 | 0.0194 | 26 | 11.0259 |
| 44.1317 | 0.0291 | 39 | 10.9899 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
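Because the tiny random Falcon checkpoint ships without a padding token, the config registers `<|endoftext|>` as `pad_token` under `special_tokens`. Outside axolotl, the equivalent tokenizer setup would look roughly like this:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "katuni4ka/tiny-random-falcon-40b", trust_remote_code=True
)

# Mirror the special_tokens block so batched sequences can be padded.
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|endoftext|>"
print(tokenizer.pad_token, tokenizer.pad_token_id)
```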
| tarabukinivan/9d5ba643-9c85-4b83-ad27-3e3c76dcd8c3 | tarabukinivan | 2025-01-30T03:29:11Z | 7 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-14B", "base_model:adapter:unsloth/Qwen2.5-14B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T02:36:23Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d5ba643-9c85-4b83-ad27-3e3c76dcd8c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-14B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc8bf750a2046088_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc8bf750a2046088_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/9d5ba643-9c85-4b83-ad27-3e3c76dcd8c3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 37
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc8bf750a2046088_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 18
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2705e754-c046-43d1-ab6e-d5d01d275ab7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2705e754-c046-43d1-ab6e-d5d01d275ab7
warmup_steps: 18
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9d5ba643-9c85-4b83-ad27-3e3c76dcd8c3
This model is a fine-tuned version of [unsloth/Qwen2.5-14B](https://huggingface.co/unsloth/Qwen2.5-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 18
- training_steps: 37
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0041 | 10 | nan |
| 0.0 | 0.0062 | 15 | nan |
| 0.0 | 0.0082 | 20 | nan |
| 0.0 | 0.0103 | 25 | nan |
| 0.0 | 0.0124 | 30 | nan |
| 0.0 | 0.0144 | 35 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| kostiantynk-out/f53cf115-7f97-4453-9091-3d8838b2f696 | kostiantynk-out | 2025-01-30T03:28:55Z | 7 | 0 | peft | ["peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "region:us"] | null | 2025-01-30T03:19:25Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f53cf115-7f97-4453-9091-3d8838b2f696
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/f53cf115-7f97-4453-9091-3d8838b2f696
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: Birthday-SN56-10-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f53cf115-7f97-4453-9091-3d8838b2f696
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 11.1302 |
| 44.4995 | 0.0097 | 13 | 11.0934 |
| 44.3301 | 0.0194 | 26 | 11.0310 |
| 44.1495 | 0.0291 | 39 | 10.9964 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| Best000/169cb39b-46ea-4c31-b0cd-71786d79bada | Best000 | 2025-01-30T03:28:54Z | 7 | 0 | peft | ["peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "region:us"] | null | 2025-01-30T03:19:30Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 169cb39b-46ea-4c31-b0cd-71786d79bada
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/169cb39b-46ea-4c31-b0cd-71786d79bada
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 169cb39b-46ea-4c31-b0cd-71786d79bada
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 11.1302 |
| 44.5122 | 0.0097 | 13 | 11.1009 |
| 44.3598 | 0.0194 | 26 | 11.0305 |
| 44.1579 | 0.0291 | 39 | 10.9872 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| nathanialhunt/a3304126-ec97-44ca-8218-479f9fc1cf06 | nathanialhunt | 2025-01-30T03:28:39Z | 6 | 0 | peft | ["peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "region:us"] | null | 2025-01-30T03:19:30Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a3304126-ec97-44ca-8218-479f9fc1cf06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/a3304126-ec97-44ca-8218-479f9fc1cf06
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a3304126-ec97-44ca-8218-479f9fc1cf06
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 11.1302 |
| 44.5008 | 0.0097 | 13 | 11.0950 |
| 44.3424 | 0.0194 | 26 | 11.0348 |
| 44.1683 | 0.0291 | 39 | 11.0013 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| ajku2199/Llama-2-7b-hf_abstract_prob6_dataset2_n1000_seed42_epochs10_batch8_qlora | ajku2199 | 2025-01-30T03:28:12Z | 8 | 0 | peft | ["peft", "safetensors", "region:us"] | null | 2025-01-10T08:43:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
| baby-dev/b5646e27-268c-44f7-8190-922eca3eecdd | baby-dev | 2025-01-30T03:28:04Z | 7 | 0 | peft | ["peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-falcon-40b", "base_model:adapter:katuni4ka/tiny-random-falcon-40b", "region:us"] | null | 2025-01-30T03:18:58Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5646e27-268c-44f7-8190-922eca3eecdd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: baby-dev/b5646e27-268c-44f7-8190-922eca3eecdd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: SN56-41
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b5646e27-268c-44f7-8190-922eca3eecdd
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 44.5585 | 0.0007 | 1 | 11.1302 |
| 44.1183 | 0.0186 | 25 | 11.0326 |
| 43.5945 | 0.0373 | 50 | 10.8938 |
| 43.4202 | 0.0559 | 75 | 10.8310 |
| 43.375 | 0.0746 | 100 | 10.8207 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| lesso16/b5106f45-749e-4df3-9209-be5e7efca82a | lesso16 | 2025-01-30T03:23:26Z | 9 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us"] | null | 2025-01-30T03:09:54Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5106f45-749e-4df3-9209-be5e7efca82a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fd7b4a135a8a3353_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fd7b4a135a8a3353_train_data.json
type:
field_input: choices
field_instruction: instruction
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso16/b5106f45-749e-4df3-9209-be5e7efca82a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/fd7b4a135a8a3353_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2585e5aa-7408-49e8-8a48-8d96ba8b51db
wandb_project: multi
wandb_run: your_name
wandb_runid: 2585e5aa-7408-49e8-8a48-8d96ba8b51db
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b5106f45-749e-4df3-9209-be5e7efca82a
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8812 | 0.8333 | 200 | 2.9728 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
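The reported batch sizes follow from the per-device micro-batch size, the gradient-accumulation factor, and the eight GPUs listed above; a quick sanity check:
```python
micro_batch_size = 2        # per-device train/eval batch size
gradient_accumulation = 4
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation * num_devices
total_eval_batch_size = micro_batch_size * num_devices  # no accumulation at eval time

assert total_train_batch_size == 64
assert total_eval_batch_size == 16
```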
| lesso08/984c3792-83c0-425a-a701-df77f4630471 | lesso08 | 2025-01-30T03:22:50Z | 9 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us"] | null | 2025-01-30T03:09:33Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 984c3792-83c0-425a-a701-df77f4630471
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fd7b4a135a8a3353_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fd7b4a135a8a3353_train_data.json
type:
field_input: choices
field_instruction: instruction
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso08/984c3792-83c0-425a-a701-df77f4630471
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/fd7b4a135a8a3353_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2585e5aa-7408-49e8-8a48-8d96ba8b51db
wandb_project: multi
wandb_run: your_name
wandb_runid: 2585e5aa-7408-49e8-8a48-8d96ba8b51db
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 984c3792-83c0-425a-a701-df77f4630471
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8824 | 0.8333 | 200 | 2.9727 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
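### Quick usage (sketch)
The card above only records the training setup, so below is a minimal, untested sketch of how a LoRA adapter like this one is typically loaded on top of its base model with 🤗 PEFT. The prompt string and generation settings are placeholders; per the axolotl config, training prompts were formatted as `'{instruction} {input}'`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "lesso08/984c3792-83c0-425a-a701-df77f4630471"

# Load the frozen base model and attach the LoRA adapter weights.
# device_map="auto" assumes accelerate is installed; drop it to load on CPU.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Prompts were built as "{instruction} {input}" during training (placeholder text here).
prompt = "Summarize the following text. <your input here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```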
|
lesso05/fc17191b-9d07-4747-ae77-e7cf37c0fa12
|
lesso05
| 2025-01-30T03:22:31Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-falcon-40b",
"base_model:adapter:katuni4ka/tiny-random-falcon-40b",
"region:us"
] | null | 2025-01-30T03:19:16Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc17191b-9d07-4747-ae77-e7cf37c0fa12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso05/fc17191b-9d07-4747-ae77-e7cf37c0fa12
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fc17191b-9d07-4747-ae77-e7cf37c0fa12
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 43.8882 | 0.1492 | 200 | 10.9569 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
minhtrannnn/0be0c0ac-3aaf-4bf8-944a-8eb9eefd4884
|
minhtrannnn
| 2025-01-30T03:21:16Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:49:05Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0be0c0ac-3aaf-4bf8-944a-8eb9eefd4884
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f111de4bd336466a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f111de4bd336466a_train_data.json
type:
field_input: dialogue
field_instruction: topic
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhtrannnn/0be0c0ac-3aaf-4bf8-944a-8eb9eefd4884
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f111de4bd336466a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0be0c0ac-3aaf-4bf8-944a-8eb9eefd4884
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9161 | 0.5814 | 200 | 1.0148 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
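### Quick usage (sketch)
The axolotl config above loads the base model in 8-bit, so the untested sketch below mirrors that quantization at inference time; `bitsandbytes` and `accelerate` are assumed to be installed, and the prompt is only a placeholder following the `'{instruction} {input}'` (topic + dialogue) format used in training.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B"
adapter_id = "minhtrannnn/0be0c0ac-3aaf-4bf8-944a-8eb9eefd4884"

# Mirror the load_in_8bit setting from the training config (requires bitsandbytes).
quant_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=quant_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Training prompts were "{topic} {dialogue}"; this is only a placeholder.
prompt = "Meeting notes <dialogue text here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```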
|
lesso17/af41af90-3a2f-4bad-b29d-c9e42a978817
|
lesso17
| 2025-01-30T03:20:34Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-falcon-40b",
"base_model:adapter:katuni4ka/tiny-random-falcon-40b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T03:17:49Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af41af90-3a2f-4bad-b29d-c9e42a978817
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 943fd678f7c64ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/943fd678f7c64ba8_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/af41af90-3a2f-4bad-b29d-c9e42a978817
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/943fd678f7c64ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 66129ec5-b788-45d9-a9f3-2f23dc0fd9cd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# af41af90-3a2f-4bad-b29d-c9e42a978817
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 43.5327 | 0.1492 | 200 | 10.8625 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
TweedleDeepLearnings/b11a04a6-df02-4dc5-a708-09c67528553b
|
TweedleDeepLearnings
| 2025-01-30T03:16:36Z | 96 | 0 |
peft
|
[
"peft",
"safetensors",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-30T02:51:38Z |
---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4b201cf-0eeb-4380-a91f-cd6329614a81
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
bf16: auto
chat_template: llama3
dataset_prepared_path: null
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
gradient_clipping: 0.1
group_by_length: false
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-04
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: linear
max_steps: 200
micro_batch_size: 128
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 4096
special_tokens:
pad_token: </PAD>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 5
weight_decay: 0.1
xformers_attention: true
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 128
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 2048
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ardaspear/b7ed16d0-5329-47fb-bed4-ed3cd4bb985a
|
ardaspear
| 2025-01-30T03:15:00Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-01-30T02:56:36Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b7ed16d0-5329-47fb-bed4-ed3cd4bb985a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a1e5e079b3bd8977_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a1e5e079b3bd8977_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/b7ed16d0-5329-47fb-bed4-ed3cd4bb985a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/a1e5e079b3bd8977_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b7ed16d0-5329-47fb-bed4-ed3cd4bb985a
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 2.7159 |
| 2.5711 | 0.0059 | 9 | 2.6027 |
| 2.4298 | 0.0117 | 18 | 2.4243 |
| 2.2505 | 0.0176 | 27 | 2.3168 |
| 2.3139 | 0.0234 | 36 | 2.2426 |
| 2.2642 | 0.0293 | 45 | 2.1904 |
| 2.1207 | 0.0351 | 54 | 2.1520 |
| 2.1652 | 0.0410 | 63 | 2.1266 |
| 2.1328 | 0.0468 | 72 | 2.1104 |
| 2.1168 | 0.0527 | 81 | 2.1012 |
| 2.0086 | 0.0585 | 90 | 2.0971 |
| 2.0465 | 0.0644 | 99 | 2.0962 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
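### Merging the adapter (sketch)
If a standalone checkpoint is more convenient than a base-model-plus-adapter pair, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload`. This is a minimal, untested sketch; the output directory name is arbitrary.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"
adapter_id = "ardaspear/b7ed16d0-5329-47fb-bed4-ed3cd4bb985a"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("llama-3.2-1b-instruct-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("llama-3.2-1b-instruct-merged")
```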
|
leixa/f7b52739-6b80-4099-bc37-a7c5225f8341
|
leixa
| 2025-01-30T03:14:58Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-01-30T02:56:13Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f7b52739-6b80-4099-bc37-a7c5225f8341
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a1e5e079b3bd8977_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a1e5e079b3bd8977_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/f7b52739-6b80-4099-bc37-a7c5225f8341
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/a1e5e079b3bd8977_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f7b52739-6b80-4099-bc37-a7c5225f8341
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 2.7159 |
| 2.5701 | 0.0059 | 9 | 2.6009 |
| 2.4309 | 0.0117 | 18 | 2.4248 |
| 2.2524 | 0.0176 | 27 | 2.3185 |
| 2.3157 | 0.0234 | 36 | 2.2440 |
| 2.2649 | 0.0293 | 45 | 2.1922 |
| 2.123 | 0.0351 | 54 | 2.1536 |
| 2.1657 | 0.0410 | 63 | 2.1282 |
| 2.1347 | 0.0468 | 72 | 2.1120 |
| 2.1166 | 0.0527 | 81 | 2.1028 |
| 2.0103 | 0.0585 | 90 | 2.0987 |
| 2.0491 | 0.0644 | 99 | 2.0977 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
karline/tts_me_realCS_dataset
|
karline
| 2025-01-30T03:14:53Z | 68 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:karline/tts_me_realCS_dataset",
"base_model:finetune:karline/tts_me_realCS_dataset",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-01-28T09:48:39Z |
---
library_name: transformers
license: mit
base_model: karline/tts_me_realCS_dataset
tags:
- generated_from_trainer
datasets:
- audiofolder
model-index:
- name: tts_me_realCS_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tts_me_realCS_dataset
This model is a fine-tuned version of [karline/tts_me_realCS_dataset](https://huggingface.co/karline/tts_me_realCS_dataset) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch (PyTorch AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.5661 | 0.1808 | 100 | 0.5027 |
| 0.5194 | 0.3616 | 200 | 0.4732 |
| 0.4983 | 0.5424 | 300 | 0.4571 |
| 0.4966 | 0.7232 | 400 | 0.4554 |
| 0.4867 | 0.9040 | 500 | 0.4494 |
| 0.4808 | 1.0832 | 600 | 0.4493 |
| 0.4806 | 1.2640 | 700 | 0.4455 |
| 0.4763 | 1.4447 | 800 | 0.4439 |
| 0.4733 | 1.6255 | 900 | 0.4427 |
| 0.4756 | 1.8063 | 1000 | 0.4377 |
| 0.4689 | 1.9871 | 1100 | 0.4357 |
| 0.4658 | 2.1663 | 1200 | 0.4343 |
| 0.4637 | 2.3471 | 1300 | 0.4342 |
| 0.462 | 2.5279 | 1400 | 0.4299 |
| 0.4621 | 2.7087 | 1500 | 0.4258 |
| 0.4571 | 2.8895 | 1600 | 0.4234 |
| 0.4539 | 3.0687 | 1700 | 0.4214 |
| 0.4485 | 3.2495 | 1800 | 0.4184 |
| 0.4502 | 3.4303 | 1900 | 0.4173 |
| 0.4493 | 3.6111 | 2000 | 0.4160 |
| 0.4459 | 3.7919 | 2100 | 0.4156 |
| 0.444 | 3.9727 | 2200 | 0.4144 |
| 0.4405 | 4.1519 | 2300 | 0.4129 |
| 0.4411 | 4.3327 | 2400 | 0.4141 |
| 0.4403 | 4.5134 | 2500 | 0.4120 |
| 0.4411 | 4.6942 | 2600 | 0.4118 |
| 0.4396 | 4.8750 | 2700 | 0.4091 |
| 0.4345 | 5.0542 | 2800 | 0.4085 |
| 0.4348 | 5.2350 | 2900 | 0.4089 |
| 0.4363 | 5.4158 | 3000 | 0.4088 |
| 0.4325 | 5.5966 | 3100 | 0.4088 |
| 0.4325 | 5.7774 | 3200 | 0.4081 |
| 0.4345 | 5.9582 | 3300 | 0.4080 |
| 0.4332 | 6.1374 | 3400 | 0.4076 |
| 0.4321 | 6.3182 | 3500 | 0.4067 |
| 0.4273 | 6.4990 | 3600 | 0.4071 |
| 0.4309 | 6.6798 | 3700 | 0.4079 |
| 0.432 | 6.8606 | 3800 | 0.4057 |
| 0.4145 | 7.0398 | 3900 | 0.4057 |
| 0.4277 | 7.2206 | 4000 | 0.4053 |
| 0.4275 | 7.4014 | 4100 | 0.4045 |
| 0.4307 | 7.5821 | 4200 | 0.4054 |
| 0.4252 | 7.7629 | 4300 | 0.4044 |
| 0.4306 | 7.9437 | 4400 | 0.4048 |
| 0.4257 | 8.1229 | 4500 | 0.4042 |
| 0.4332 | 8.3037 | 4600 | 0.4049 |
| 0.4269 | 8.4845 | 4700 | 0.4041 |
| 0.429 | 8.6653 | 4800 | 0.4033 |
| 0.4245 | 8.8461 | 4900 | 0.4043 |
| 0.4111 | 9.0253 | 5000 | 0.4043 |
| 0.4304 | 9.2224 | 5100 | 0.4078 |
| 0.4316 | 9.4032 | 5200 | 0.4080 |
| 0.4304 | 9.5840 | 5300 | 0.4079 |
| 0.431 | 9.7647 | 5400 | 0.4073 |
| 0.4325 | 9.9455 | 5500 | 0.4059 |
| 0.4293 | 10.1266 | 5600 | 0.4087 |
| 0.4285 | 10.3073 | 5700 | 0.4089 |
| 0.4279 | 10.4881 | 5800 | 0.4092 |
| 0.4295 | 10.6689 | 5900 | 0.4074 |
| 0.4319 | 10.8497 | 6000 | 0.4064 |
| 0.4152 | 11.0289 | 6100 | 0.4053 |
| 0.4209 | 11.2097 | 6200 | 0.4049 |
| 0.4285 | 11.3905 | 6300 | 0.4052 |
| 0.4258 | 11.5713 | 6400 | 0.4063 |
| 0.4302 | 11.7521 | 6500 | 0.4055 |
| 0.4274 | 11.9329 | 6600 | 0.4046 |
| 0.42 | 12.1121 | 6700 | 0.4055 |
| 0.4254 | 12.2929 | 6800 | 0.4042 |
| 0.4234 | 12.4737 | 6900 | 0.4050 |
| 0.4208 | 12.6545 | 7000 | 0.4064 |
| 0.423 | 12.8353 | 7100 | 0.4032 |
| 0.4093 | 13.0145 | 7200 | 0.4050 |
| 0.4217 | 13.1953 | 7300 | 0.4070 |
| 0.422 | 13.3760 | 7400 | 0.4053 |
| 0.4198 | 13.5568 | 7500 | 0.4029 |
| 0.421 | 13.7376 | 7600 | 0.4032 |
| 0.4215 | 13.9184 | 7700 | 0.4052 |
| 0.4176 | 14.0976 | 7800 | 0.4042 |
| 0.4197 | 14.2784 | 7900 | 0.4040 |
| 0.42 | 14.4592 | 8000 | 0.4059 |
| 0.423 | 14.64 | 8100 | 0.4045 |
| 0.418 | 14.8208 | 8200 | 0.4032 |
| 0.4038 | 15.0 | 8300 | 0.4036 |
| 0.4213 | 15.1808 | 8400 | 0.4049 |
| 0.4175 | 15.3616 | 8500 | 0.4059 |
| 0.4186 | 15.5424 | 8600 | 0.4051 |
| 0.4181 | 15.7232 | 8700 | 0.4023 |
| 0.4136 | 15.9040 | 8800 | 0.4037 |
| 0.4165 | 16.0832 | 8900 | 0.4069 |
| 0.4164 | 16.2640 | 9000 | 0.4044 |
| 0.4158 | 16.4447 | 9100 | 0.4072 |
| 0.4145 | 16.6255 | 9200 | 0.4040 |
| 0.4158 | 16.8063 | 9300 | 0.4016 |
| 0.4206 | 16.9871 | 9400 | 0.4113 |
| 0.4135 | 17.1663 | 9500 | 0.4052 |
| 0.4134 | 17.3471 | 9600 | 0.4049 |
| 0.4145 | 17.5279 | 9700 | 0.4070 |
| 0.4138 | 17.7087 | 9800 | 0.4056 |
| 0.4152 | 17.8895 | 9900 | 0.4058 |
| 0.4151 | 18.0687 | 10000 | 0.4057 |
| 0.4135 | 18.2495 | 10100 | 0.4055 |
| 0.4114 | 18.4303 | 10200 | 0.4062 |
| 0.4111 | 18.6111 | 10300 | 0.4048 |
| 0.4128 | 18.7919 | 10400 | 0.4058 |
| 0.4092 | 18.9727 | 10500 | 0.4043 |
| 0.4118 | 19.1519 | 10600 | 0.4064 |
| 0.4131 | 19.3327 | 10700 | 0.4059 |
| 0.4104 | 19.5134 | 10800 | 0.4044 |
| 0.4157 | 19.6942 | 10900 | 0.4060 |
| 0.4133 | 19.8750 | 11000 | 0.4051 |
| 0.4109 | 20.0542 | 11100 | 0.4058 |
| 0.4128 | 20.2350 | 11200 | 0.4043 |
| 0.4101 | 20.4158 | 11300 | 0.4055 |
| 0.4096 | 20.5966 | 11400 | 0.4043 |
| 0.4101 | 20.7774 | 11500 | 0.4031 |
| 0.4092 | 20.9582 | 11600 | 0.4062 |
| 0.41 | 21.1374 | 11700 | 0.4052 |
| 0.4101 | 21.3182 | 11800 | 0.4064 |
| 0.407 | 21.4990 | 11900 | 0.4049 |
| 0.4106 | 21.6798 | 12000 | 0.4068 |
| 0.4077 | 21.8606 | 12100 | 0.4035 |
| 0.3941 | 22.0398 | 12200 | 0.4071 |
| 0.4087 | 22.2206 | 12300 | 0.4110 |
| 0.4097 | 22.4014 | 12400 | 0.4045 |
| 0.4096 | 22.5821 | 12500 | 0.4056 |
| 0.4099 | 22.7629 | 12600 | 0.4052 |
| 0.4064 | 22.9437 | 12700 | 0.4082 |
| 0.4065 | 23.1229 | 12800 | 0.4071 |
| 0.405 | 23.3037 | 12900 | 0.4071 |
| 0.4069 | 23.4845 | 13000 | 0.4062 |
| 0.405 | 23.6653 | 13100 | 0.4069 |
| 0.4078 | 23.8461 | 13200 | 0.4057 |
| 0.394 | 24.0253 | 13300 | 0.4079 |
| 0.4063 | 24.2061 | 13400 | 0.4075 |
| 0.4087 | 24.4122 | 13500 | 0.4092 |
| 0.4094 | 24.5930 | 13600 | 0.4069 |
| 0.4104 | 24.7738 | 13700 | 0.4066 |
| 0.4076 | 24.9546 | 13800 | 0.4107 |
| 0.4094 | 25.1356 | 13900 | 0.4087 |
| 0.4061 | 25.3164 | 14000 | 0.4059 |
| 0.4095 | 25.4972 | 14100 | 0.4082 |
| 0.4069 | 25.6780 | 14200 | 0.4099 |
| 0.4101 | 25.8588 | 14300 | 0.4076 |
| 0.3903 | 26.0380 | 14400 | 0.4075 |
| 0.4075 | 26.2188 | 14500 | 0.4102 |
| 0.4091 | 26.3995 | 14600 | 0.4092 |
| 0.4095 | 26.5803 | 14700 | 0.4070 |
| 0.4065 | 26.7611 | 14800 | 0.4088 |
| 0.4099 | 26.9419 | 14900 | 0.4088 |
| 0.4072 | 27.1211 | 15000 | 0.4088 |
| 0.404 | 27.3019 | 15100 | 0.4076 |
| 0.4072 | 27.4827 | 15200 | 0.4088 |
| 0.4058 | 27.6635 | 15300 | 0.4074 |
| 0.4089 | 27.8443 | 15400 | 0.4084 |
| 0.3922 | 28.0235 | 15500 | 0.4076 |
| 0.4069 | 28.2043 | 15600 | 0.4118 |
| 0.406 | 28.3851 | 15700 | 0.4077 |
| 0.4039 | 28.5659 | 15800 | 0.4084 |
| 0.4076 | 28.7467 | 15900 | 0.4056 |
| 0.4057 | 28.9275 | 16000 | 0.4067 |
| 0.4065 | 29.1067 | 16100 | 0.4081 |
| 0.407 | 29.2875 | 16200 | 0.4092 |
| 0.4061 | 29.4682 | 16300 | 0.4150 |
| 0.4049 | 29.6490 | 16400 | 0.4074 |
| 0.4057 | 29.8298 | 16500 | 0.4068 |
| 0.3907 | 30.0090 | 16600 | 0.4106 |
| 0.4069 | 30.1898 | 16700 | 0.4098 |
| 0.399 | 30.3706 | 16800 | 0.4051 |
| 0.4066 | 30.5514 | 16900 | 0.4100 |
| 0.403 | 30.7322 | 17000 | 0.4073 |
| 0.4052 | 30.9130 | 17100 | 0.4060 |
| 0.4007 | 31.0922 | 17200 | 0.4074 |
| 0.4053 | 31.2730 | 17300 | 0.4097 |
| 0.4016 | 31.4538 | 17400 | 0.4130 |
| 0.4034 | 31.6346 | 17500 | 0.4100 |
| 0.3997 | 31.8154 | 17600 | 0.4098 |
| 0.4054 | 31.9962 | 17700 | 0.4088 |
| 0.403 | 32.1754 | 17800 | 0.4127 |
| 0.4043 | 32.3562 | 17900 | 0.4080 |
| 0.4041 | 32.5369 | 18000 | 0.4069 |
| 0.4046 | 32.7177 | 18100 | 0.4078 |
| 0.401 | 32.8985 | 18200 | 0.4081 |
| 0.4019 | 33.0777 | 18300 | 0.4113 |
| 0.4005 | 33.2585 | 18400 | 0.4083 |
| 0.4058 | 33.4393 | 18500 | 0.4083 |
| 0.4031 | 33.6201 | 18600 | 0.4089 |
| 0.4027 | 33.8009 | 18700 | 0.4092 |
| 0.4005 | 33.9817 | 18800 | 0.4102 |
| 0.3994 | 34.1609 | 18900 | 0.4086 |
| 0.4017 | 34.3417 | 19000 | 0.4113 |
| 0.4002 | 34.5225 | 19100 | 0.4114 |
| 0.4018 | 34.7033 | 19200 | 0.4117 |
| 0.3996 | 34.8841 | 19300 | 0.4089 |
| 0.402 | 35.0633 | 19400 | 0.4101 |
| 0.3999 | 35.2441 | 19500 | 0.4125 |
| 0.401 | 35.4249 | 19600 | 0.4144 |
| 0.3983 | 35.6056 | 19700 | 0.4122 |
| 0.4008 | 35.7864 | 19800 | 0.4106 |
| 0.4036 | 35.9672 | 19900 | 0.4084 |
| 0.3991 | 36.1464 | 20000 | 0.4149 |
| 0.4022 | 36.3272 | 20100 | 0.4183 |
| 0.3966 | 36.5080 | 20200 | 0.4134 |
| 0.3977 | 36.6888 | 20300 | 0.4113 |
| 0.4031 | 36.8696 | 20400 | 0.4136 |
| 0.3977 | 37.0488 | 20500 | 0.4127 |
| 0.3951 | 37.2296 | 20600 | 0.4145 |
| 0.3977 | 37.4104 | 20700 | 0.4126 |
| 0.3984 | 37.5912 | 20800 | 0.4091 |
| 0.4003 | 37.7720 | 20900 | 0.4107 |
| 0.3994 | 37.9528 | 21000 | 0.4102 |
| 0.3996 | 38.1320 | 21100 | 0.4132 |
| 0.3976 | 38.3128 | 21200 | 0.4152 |
| 0.3982 | 38.4936 | 21300 | 0.4085 |
| 0.3993 | 38.6744 | 21400 | 0.4112 |
| 0.3969 | 38.8551 | 21500 | 0.4104 |
| 0.3845 | 39.0344 | 21600 | 0.4127 |
| 0.3985 | 39.2151 | 21700 | 0.4116 |
| 0.3949 | 39.3959 | 21800 | 0.4121 |
| 0.3998 | 39.5767 | 21900 | 0.4108 |
| 0.399 | 39.7575 | 22000 | 0.4106 |
| 0.3994 | 39.9383 | 22100 | 0.4164 |
| 0.398 | 40.1175 | 22200 | 0.4125 |
| 0.396 | 40.2983 | 22300 | 0.4138 |
| 0.3953 | 40.4791 | 22400 | 0.4104 |
| 0.3951 | 40.6599 | 22500 | 0.4190 |
| 0.3967 | 40.8407 | 22600 | 0.4120 |
| 0.3809 | 41.0199 | 22700 | 0.4141 |
| 0.3966 | 41.2007 | 22800 | 0.4141 |
| 0.3965 | 41.3815 | 22900 | 0.4132 |
| 0.396 | 41.5623 | 23000 | 0.4114 |
| 0.3949 | 41.7431 | 23100 | 0.4120 |
| 0.3989 | 41.9238 | 23200 | 0.4149 |
| 0.3962 | 42.1031 | 23300 | 0.4115 |
| 0.3957 | 42.2838 | 23400 | 0.4131 |
| 0.3951 | 42.4646 | 23500 | 0.4153 |
| 0.3953 | 42.6454 | 23600 | 0.4147 |
| 0.3952 | 42.8262 | 23700 | 0.4110 |
| 0.3817 | 43.0054 | 23800 | 0.4150 |
| 0.3987 | 43.1862 | 23900 | 0.4156 |
| 0.3946 | 43.3670 | 24000 | 0.4156 |
| 0.3939 | 43.5478 | 24100 | 0.4123 |
| 0.3938 | 43.7286 | 24200 | 0.4161 |
| 0.3958 | 43.9094 | 24300 | 0.4183 |
| 0.3955 | 44.0886 | 24400 | 0.4157 |
| 0.3949 | 44.2694 | 24500 | 0.4145 |
| 0.3951 | 44.4502 | 24600 | 0.4151 |
| 0.3982 | 44.6310 | 24700 | 0.4167 |
| 0.3962 | 44.8118 | 24800 | 0.4133 |
| 0.3927 | 44.9925 | 24900 | 0.4180 |
| 0.3951 | 45.1718 | 25000 | 0.4119 |
| 0.3937 | 45.3525 | 25100 | 0.4153 |
| 0.3942 | 45.5333 | 25200 | 0.4152 |
| 0.3968 | 45.7141 | 25300 | 0.4141 |
| 0.3935 | 45.8949 | 25400 | 0.4121 |
| 0.3912 | 46.0741 | 25500 | 0.4161 |
| 0.391 | 46.2549 | 25600 | 0.4120 |
| 0.3942 | 46.4357 | 25700 | 0.4167 |
| 0.3931 | 46.6165 | 25800 | 0.4157 |
| 0.3933 | 46.7973 | 25900 | 0.4171 |
| 0.3954 | 46.9781 | 26000 | 0.4175 |
| 0.3926 | 47.1573 | 26100 | 0.4124 |
| 0.3929 | 47.3381 | 26200 | 0.4148 |
| 0.3955 | 47.5189 | 26300 | 0.4183 |
| 0.3963 | 47.6997 | 26400 | 0.4152 |
| 0.3928 | 47.8805 | 26500 | 0.4154 |
| 0.3929 | 48.0597 | 26600 | 0.4140 |
| 0.3945 | 48.2405 | 26700 | 0.4200 |
| 0.3938 | 48.4212 | 26800 | 0.4159 |
| 0.39 | 48.6020 | 26900 | 0.4132 |
| 0.3922 | 48.7828 | 27000 | 0.4195 |
| 0.3928 | 48.9636 | 27100 | 0.4168 |
| 0.3931 | 49.1428 | 27200 | 0.4177 |
| 0.3915 | 49.3236 | 27300 | 0.4157 |
| 0.3911 | 49.5044 | 27400 | 0.4167 |
| 0.3919 | 49.6852 | 27500 | 0.4188 |
| 0.3936 | 49.8660 | 27600 | 0.4137 |
| 0.3924 | 50.0452 | 27700 | 0.4162 |
| 0.3911 | 50.2260 | 27800 | 0.4165 |
| 0.3942 | 50.4068 | 27900 | 0.4186 |
| 0.3895 | 50.5876 | 28000 | 0.4165 |
| 0.3907 | 50.7684 | 28100 | 0.4217 |
| 0.3885 | 50.9492 | 28200 | 0.4166 |
| 0.3918 | 51.1284 | 28300 | 0.4171 |
| 0.3885 | 51.3092 | 28400 | 0.4153 |
| 0.3899 | 51.4899 | 28500 | 0.4161 |
| 0.3933 | 51.6707 | 28600 | 0.4176 |
| 0.3911 | 51.8515 | 28700 | 0.4160 |
| 0.3771 | 52.0307 | 28800 | 0.4169 |
| 0.393 | 52.2115 | 28900 | 0.4188 |
| 0.3901 | 52.3923 | 29000 | 0.4145 |
| 0.3918 | 52.5731 | 29100 | 0.4176 |
| 0.3901 | 52.7539 | 29200 | 0.4179 |
| 0.3928 | 52.9347 | 29300 | 0.4179 |
| 0.3883 | 53.1139 | 29400 | 0.4172 |
| 0.3886 | 53.2947 | 29500 | 0.4205 |
| 0.3876 | 53.4755 | 29600 | 0.4184 |
| 0.3939 | 53.6563 | 29700 | 0.4168 |
| 0.3906 | 53.8371 | 29800 | 0.4165 |
| 0.3763 | 54.0163 | 29900 | 0.4173 |
| 0.3902 | 54.1971 | 30000 | 0.4165 |
| 0.3886 | 54.3779 | 30100 | 0.4175 |
| 0.3889 | 54.5586 | 30200 | 0.4191 |
| 0.3926 | 54.7394 | 30300 | 0.4196 |
| 0.389 | 54.9202 | 30400 | 0.4182 |
| 0.3921 | 55.0994 | 30500 | 0.4196 |
| 0.3923 | 55.2802 | 30600 | 0.4196 |
| 0.3882 | 55.4610 | 30700 | 0.4202 |
| 0.3906 | 55.6418 | 30800 | 0.4187 |
| 0.3902 | 55.8226 | 30900 | 0.4187 |
| 0.3751 | 56.0018 | 31000 | 0.4189 |
| 0.3874 | 56.1826 | 31100 | 0.4208 |
| 0.3907 | 56.3634 | 31200 | 0.4198 |
| 0.3915 | 56.5442 | 31300 | 0.4197 |
| 0.3872 | 56.7250 | 31400 | 0.4216 |
| 0.3905 | 56.9058 | 31500 | 0.4208 |
| 0.3893 | 57.0850 | 31600 | 0.4207 |
| 0.3904 | 57.2658 | 31700 | 0.4228 |
| 0.3872 | 57.4466 | 31800 | 0.4217 |
| 0.3878 | 57.6273 | 31900 | 0.4205 |
| 0.3899 | 57.8081 | 32000 | 0.4220 |
| 0.3865 | 57.9889 | 32100 | 0.4212 |
| 0.388 | 58.1681 | 32200 | 0.4181 |
| 0.3878 | 58.3489 | 32300 | 0.4194 |
| 0.3917 | 58.5297 | 32400 | 0.4188 |
| 0.3894 | 58.7105 | 32500 | 0.4202 |
| 0.3876 | 58.8913 | 32600 | 0.4224 |
| 0.3903 | 59.0705 | 32700 | 0.4207 |
| 0.3887 | 59.2513 | 32800 | 0.4200 |
| 0.3871 | 59.4321 | 32900 | 0.4208 |
| 0.3867 | 59.6129 | 33000 | 0.4220 |
| 0.3864 | 59.7937 | 33100 | 0.4187 |
| 0.3881 | 59.9745 | 33200 | 0.4215 |
| 0.3853 | 60.1537 | 33300 | 0.4197 |
| 0.3883 | 60.3345 | 33400 | 0.4202 |
| 0.3883 | 60.5153 | 33500 | 0.4189 |
| 0.3879 | 60.6960 | 33600 | 0.4198 |
| 0.3919 | 60.8768 | 33700 | 0.4195 |
| 0.3898 | 61.0560 | 33800 | 0.4199 |
| 0.3877 | 61.2368 | 33900 | 0.4218 |
| 0.3869 | 61.4176 | 34000 | 0.4216 |
| 0.3898 | 61.5984 | 34100 | 0.4209 |
| 0.3877 | 61.7792 | 34200 | 0.4201 |
| 0.3857 | 61.96 | 34300 | 0.4216 |
| 0.3869 | 62.1392 | 34400 | 0.4207 |
| 0.3863 | 62.32 | 34500 | 0.4227 |
| 0.387 | 62.5008 | 34600 | 0.4216 |
| 0.386 | 62.6816 | 34700 | 0.4227 |
| 0.3885 | 62.8624 | 34800 | 0.4200 |
| 0.3726 | 63.0416 | 34900 | 0.4223 |
| 0.3894 | 63.2224 | 35000 | 0.4240 |
| 0.386 | 63.4032 | 35100 | 0.4219 |
| 0.3875 | 63.5840 | 35200 | 0.4217 |
| 0.3854 | 63.7647 | 35300 | 0.4207 |
| 0.3849 | 63.9455 | 35400 | 0.4207 |
| 0.3879 | 64.1247 | 35500 | 0.4229 |
| 0.3864 | 64.3055 | 35600 | 0.4216 |
| 0.3845 | 64.4863 | 35700 | 0.4219 |
| 0.3853 | 64.6671 | 35800 | 0.4200 |
| 0.3927 | 64.8479 | 35900 | 0.4214 |
| 0.3747 | 65.0271 | 36000 | 0.4207 |
| 0.3858 | 65.2079 | 36100 | 0.4222 |
| 0.3879 | 65.3887 | 36200 | 0.4225 |
| 0.3886 | 65.5695 | 36300 | 0.4222 |
| 0.3851 | 65.7503 | 36400 | 0.4222 |
| 0.3875 | 65.9311 | 36500 | 0.4239 |
| 0.3859 | 66.1103 | 36600 | 0.4231 |
| 0.3878 | 66.2911 | 36700 | 0.4227 |
| 0.3873 | 66.4719 | 36800 | 0.4257 |
| 0.385 | 66.6527 | 36900 | 0.4239 |
| 0.3853 | 66.8334 | 37000 | 0.4236 |
| 0.3691 | 67.0127 | 37100 | 0.4251 |
| 0.3888 | 67.1934 | 37200 | 0.4256 |
| 0.3844 | 67.3742 | 37300 | 0.4222 |
| 0.387 | 67.5550 | 37400 | 0.4233 |
| 0.3853 | 67.7358 | 37500 | 0.4224 |
| 0.3846 | 67.9166 | 37600 | 0.4237 |
| 0.3869 | 68.0958 | 37700 | 0.4246 |
| 0.3827 | 68.2766 | 37800 | 0.4232 |
| 0.3838 | 68.4574 | 37900 | 0.4225 |
| 0.3849 | 68.6382 | 38000 | 0.4245 |
| 0.3885 | 68.8190 | 38100 | 0.4241 |
| 0.3878 | 68.9998 | 38200 | 0.4238 |
| 0.3858 | 69.1790 | 38300 | 0.4247 |
| 0.3853 | 69.3598 | 38400 | 0.4247 |
| 0.3883 | 69.5406 | 38500 | 0.4248 |
| 0.3895 | 69.7214 | 38600 | 0.4258 |
| 0.3858 | 69.9021 | 38700 | 0.4236 |
| 0.3876 | 70.0814 | 38800 | 0.4241 |
| 0.3865 | 70.2621 | 38900 | 0.4248 |
| 0.3859 | 70.4429 | 39000 | 0.4259 |
| 0.3872 | 70.6237 | 39100 | 0.4247 |
| 0.3823 | 70.8045 | 39200 | 0.4247 |
| 0.385 | 70.9853 | 39300 | 0.4251 |
| 0.3852 | 71.1645 | 39400 | 0.4252 |
| 0.3858 | 71.3453 | 39500 | 0.4255 |
| 0.3826 | 71.5261 | 39600 | 0.4253 |
| 0.3877 | 71.7069 | 39700 | 0.4251 |
| 0.3861 | 71.8877 | 39800 | 0.4253 |
| 0.3845 | 72.0669 | 39900 | 0.4248 |
| 0.3857 | 72.2477 | 40000 | 0.4252 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.2.1
- Datasets 3.2.0
- Tokenizers 0.21.0
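### Inference (sketch)
This is a SpeechT5 text-to-audio fine-tune, so inference follows the standard SpeechT5 pipeline. The sketch below is untested and assumes the processor is bundled with this repo (otherwise load it from `microsoft/speecht5_tts`) and that you supply your own 512-dimensional x-vector speaker embedding; the zero vector here is only a placeholder.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "karline/tts_me_realCS_dataset"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")

# SpeechT5 conditions on a speaker x-vector; replace this placeholder with an
# embedding extracted from reference audio of the target voice.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```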
|
nadejdatarabukina/5e4b4140-24a3-417d-a250-a8a2dc2b4a6f
|
nadejdatarabukina
| 2025-01-30T03:07:37Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-14B",
"base_model:adapter:unsloth/Qwen2.5-14B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T02:36:19Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e4b4140-24a3-417d-a250-a8a2dc2b4a6f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-14B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc8bf750a2046088_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc8bf750a2046088_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/5e4b4140-24a3-417d-a250-a8a2dc2b4a6f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.02
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 33
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc8bf750a2046088_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 17
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2705e754-c046-43d1-ab6e-d5d01d275ab7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2705e754-c046-43d1-ab6e-d5d01d275ab7
warmup_steps: 17
weight_decay: 0.005
xformers_attention: true
```
</details><br>
# 5e4b4140-24a3-417d-a250-a8a2dc2b4a6f
This model is a fine-tuned version of [unsloth/Qwen2.5-14B](https://huggingface.co/unsloth/Qwen2.5-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch (PyTorch AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 17
- training_steps: 33
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0041 | 10 | nan |
| 0.0 | 0.0062 | 15 | nan |
| 0.0 | 0.0082 | 20 | nan |
| 0.0 | 0.0103 | 25 | nan |
| 0.0 | 0.0124 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
gavrilstep/607242f0-2e49-4fc1-b572-c3e0437aa354
|
gavrilstep
| 2025-01-30T03:07:23Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-01-30T02:56:34Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 607242f0-2e49-4fc1-b572-c3e0437aa354
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a1e5e079b3bd8977_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a1e5e079b3bd8977_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/607242f0-2e49-4fc1-b572-c3e0437aa354
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 39
micro_batch_size: 2
mlflow_experiment_name: /tmp/a1e5e079b3bd8977_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 21
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ee1387e-ec6a-44cd-b489-4ec211ccdb84
warmup_steps: 21
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 607242f0-2e49-4fc1-b572-c3e0437aa354
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch (PyTorch AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 21
- training_steps: 39
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0008 | 5 | nan |
| 0.0 | 0.0016 | 10 | nan |
| 0.0 | 0.0024 | 15 | nan |
| 0.0 | 0.0033 | 20 | nan |
| 0.0 | 0.0041 | 25 | nan |
| 0.0 | 0.0049 | 30 | nan |
| 0.0 | 0.0057 | 35 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Elana/InterPLM-esm2-8m
|
Elana
| 2025-01-30T03:06:49Z | 5 | 0 | null |
[
"sparse_autoencoder",
"protein-language-models",
"sparse-autoencoder",
"en",
"license:mit",
"region:us"
] | null | 2025-01-25T00:40:45Z |
---
language:
- en
tags:
- protein-language-models
- sparse-autoencoder
license: mit
---
# Sparse Autoencoders for ESM-2 (8M)
Interpret protein language model representations using sparse autoencoders trained on ESM-2 (8M) layers. These models decompose complex neural representations into interpretable features, enabling deeper understanding of how protein language models process sequence information.
* 📊 Model details in the [InterPLM pre-print](https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1)
* 👩💻 Training and analysis code in the [GitHub repo](https://github.com/ElanaPearl/InterPLM)
* 🧬 Explore features at [InterPLM.ai](https://www.interplm.ai)
## Model Details
- Base Model: ESM-2 8M (6 layers)
- Architecture: Sparse Autoencoder
- Input Dimension: 320
- Feature Dimension: 10,240
## Available Models
We provide SAE models trained on different layers of ESM-2-8M:
| Model name | ESM2 model | ESM2 layer |
|-|-|-|
| [InterPLM-esm2-8m-l1](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_1) | esm2_t6_8m_UR50D | 1 |
| [InterPLM-esm2-8m-l2](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_2) | esm2_t6_8m_UR50D | 2 |
| [InterPLM-esm2-8m-l3](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_3) | esm2_t6_8m_UR50D | 3 |
| [InterPLM-esm2-8m-l4](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_4) | esm2_t6_8m_UR50D | 4 |
| [InterPLM-esm2-8m-l5](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_5) | esm2_t6_8m_UR50D | 5 |
| [InterPLM-esm2-8m-l6](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_6) | esm2_t6_8m_UR50D | 6 |
All models share the same architecture and dictionary size (10,240). See [here](https://huggingface.co/Elana/InterPLM-esm2-650m) for SAEs trained on ESM-2 650M. The 650M SAEs capture more known biological concepts than the 8M but require additional compute for both ESM embedding and SAE feature extraction.
## Usage
Extract interpretable features from protein sequences:
```python
from interplm.sae.inference import load_sae_from_hf
from interplm.esm.embed import embed_single_sequence
# Get ESM embeddings for protein sequence
embeddings = embed_single_sequence(
sequence="MRWQEMGYIFYPRKLR",
model_name="esm2_t6_8M_UR50D",
layer=4 # Choose ESM layer (1-6)
)
# Load SAE model and extract features
sae = load_sae_from_hf(plm_model="esm2-8m", plm_layer=4)
features = sae.encode(embeddings)
```
For detailed training and analysis examples, see the [GitHub README](https://github.com/ElanaPearl/InterPLM/blob/main/README.md).
## Model Variants
The SAEs we've trained have arbitrary scales across features, since the encoder/decoder weights can be linearly rescaled without changing the reconstructions. To make features comparable, we normalize each one to activate between 0 and 1 based on its maximum activation value over Swiss-Prot (our primary analysis dataset). By default, use the pre-normalized SAEs (`ae_normalized.pt`). Because this scaling may not be accurate for features that are absent from Swiss-Prot proteins, use `ae_unnormalized.pt` with [this code](https://github.com/ElanaPearl/InterPLM/blob/main/interplm/sae/normalize.py) if you need custom normalization.
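For intuition, the normalization itself is just a per-feature rescaling; the linked script is the reference implementation, but a minimal sketch (assuming you have already computed a vector `max_acts` of per-feature maximum activations over your own reference set) looks like:
```python
import torch

def normalize_features(features: torch.Tensor, max_acts: torch.Tensor) -> torch.Tensor:
    """Rescale SAE features so each one lies roughly in [0, 1].

    features: [n_tokens, 10240] activations from sae.encode(embeddings)
    max_acts: [10240] per-feature maximum activation over a reference dataset
    """
    # Guard against features that never activate in the reference set;
    # activations above the reference maximum will still exceed 1.
    scale = torch.clamp(max_acts, min=1e-8)
    return features / scale
```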
|
vapegod/g6
|
vapegod
| 2025-01-30T03:04:50Z | 70 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-30T03:02:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrhunghd/ccf7dcea-1711-44d4-af66-50b54f3673e5
|
mrhunghd
| 2025-01-30T03:03:38Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:49:16Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ccf7dcea-1711-44d4-af66-50b54f3673e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f111de4bd336466a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f111de4bd336466a_train_data.json
type:
field_input: dialogue
field_instruction: topic
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/ccf7dcea-1711-44d4-af66-50b54f3673e5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f111de4bd336466a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ccf7dcea-1711-44d4-af66-50b54f3673e5
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9136 | 0.5814 | 200 | 1.0151 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
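
Although the card does not yet include a usage example, a minimal sketch of loading this LoRA adapter on top of its base model with 🤗 PEFT might look as follows (repository ids are taken from this card; the prompt is a made-up illustration of the `'{instruction} {input}'` format used in training, and `device_map="auto"` assumes `accelerate` is installed):

```python
# Minimal sketch, assuming the adapter loads with standard PEFT/transformers APIs.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B"
adapter_id = "mrhunghd/ccf7dcea-1711-44d4-af66-50b54f3673e5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

# Illustrative prompt: topic (instruction) followed by the dialogue (input).
prompt = "Weekend plans A: Are you free on Saturday? B: Yes, let's go hiking."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```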
|
visdata/raise3
|
visdata
| 2025-01-30T03:02:30Z | 38 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-30T02:55:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
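
Pending the authors' own instructions, a minimal text-generation sketch (assuming the checkpoint works with the standard auto classes, as the `llama`/`text-generation` tags suggest) could look like this:

```python
# Hedged sketch: load the checkpoint with the standard transformers auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "visdata/raise3"  # repository id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```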
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nbninh/eb743aac-07c7-4f74-be63-f274512bd706
|
nbninh
| 2025-01-30T03:02:08Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:49:03Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eb743aac-07c7-4f74-be63-f274512bd706
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f111de4bd336466a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f111de4bd336466a_train_data.json
type:
field_input: dialogue
field_instruction: topic
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/eb743aac-07c7-4f74-be63-f274512bd706
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f111de4bd336466a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# eb743aac-07c7-4f74-be63-f274512bd706
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9062 | 0.5814 | 200 | 1.0146 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
myhaaaaaaa/9d922f19-b335-41c3-ac79-7931f65119d7
|
myhaaaaaaa
| 2025-01-30T03:01:41Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:49:32Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d922f19-b335-41c3-ac79-7931f65119d7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f111de4bd336466a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f111de4bd336466a_train_data.json
type:
field_input: dialogue
field_instruction: topic
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/9d922f19-b335-41c3-ac79-7931f65119d7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f111de4bd336466a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9d922f19-b335-41c3-ac79-7931f65119d7
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9084 | 0.5814 | 200 | 1.0134 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
datlaaaaaaa/5e4a99bb-d816-4045-ac35-60d7eb9593de
|
datlaaaaaaa
| 2025-01-30T03:01:36Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-30T02:49:14Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e4a99bb-d816-4045-ac35-60d7eb9593de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f111de4bd336466a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f111de4bd336466a_train_data.json
type:
field_input: dialogue
field_instruction: topic
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/5e4a99bb-d816-4045-ac35-60d7eb9593de
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f111de4bd336466a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5e4a99bb-d816-4045-ac35-60d7eb9593de
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.903 | 0.5814 | 200 | 1.0133 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JacksonBrune/c9327a65-4034-4f0b-af5c-58ce19320cf4
|
JacksonBrune
| 2025-01-30T02:59:27Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T02:53:31Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c9327a65-4034-4f0b-af5c-58ce19320cf4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f111de4bd336466a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f111de4bd336466a_train_data.json
type:
field_input: dialogue
field_instruction: topic
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/c9327a65-4034-4f0b-af5c-58ce19320cf4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f111de4bd336466a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e5f8251-6316-40ce-a0d9-ee3cf277b82f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c9327a65-4034-4f0b-af5c-58ce19320cf4
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2582 | 0.0029 | 1 | 2.1271 |
| 1.6584 | 0.0378 | 13 | 1.3767 |
| 1.143 | 0.0756 | 26 | 1.1119 |
| 1.0689 | 0.1134 | 39 | 1.0709 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
prxy5604/984fa9e1-fce5-45f4-a5b4-eb088ca84258
|
prxy5604
| 2025-01-30T02:59:13Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | 2025-01-30T02:02:23Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 984fa9e1-fce5-45f4-a5b4-eb088ca84258
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 8e83a81599a1c92e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8e83a81599a1c92e_train_data.json
type:
field_instruction: description
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/984fa9e1-fce5-45f4-a5b4-eb088ca84258
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/8e83a81599a1c92e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3fae5bf-6f85-4e00-b401-849bb92f687b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f3fae5bf-6f85-4e00-b401-849bb92f687b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 984fa9e1-fce5-45f4-a5b4-eb088ca84258
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes); default betas=(0.9, 0.999) and epsilon=1e-08, with optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (as set in the config above)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5346 | 0.0004 | 1 | 3.9100 |
| 3.0605 | 0.0177 | 50 | 2.2425 |
| 2.9457 | 0.0354 | 100 | 2.0306 |
| 2.6463 | 0.0530 | 150 | 1.9341 |
| 2.6142 | 0.0707 | 200 | 1.9099 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
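
As with the other adapters in this collection, a hedged sketch of attaching the LoRA weights and optionally merging them into the base model (repository ids from this card; `merge_and_unload` is standard PEFT API) might be:

```python
# Sketch only: attach the adapter, then merge it for adapter-free inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
adapter_id = "prxy5604/984fa9e1-fce5-45f4-a5b4-eb088ca84258"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
```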
|
chchen/Ministral-8B-Instruct-2410-PsyCourse-fold3
|
chchen
| 2025-01-30T02:56:59Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:adapter:mistralai/Ministral-8B-Instruct-2410",
"license:other",
"region:us"
] | null | 2025-01-29T14:24:03Z |
---
library_name: peft
license: other
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Ministral-8B-Instruct-2410-PsyCourse-fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ministral-8B-Instruct-2410-PsyCourse-fold3
This model is a fine-tuned version of [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) on the course-train-fold1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2581 | 0.0770 | 50 | 0.2414 |
| 0.0852 | 0.1539 | 100 | 0.0696 |
| 0.0612 | 0.2309 | 150 | 0.0584 |
| 0.0579 | 0.3078 | 200 | 0.0537 |
| 0.0436 | 0.3848 | 250 | 0.0433 |
| 0.0395 | 0.4617 | 300 | 0.0470 |
| 0.0436 | 0.5387 | 350 | 0.0454 |
| 0.0487 | 0.6156 | 400 | 0.0436 |
| 0.0302 | 0.6926 | 450 | 0.0377 |
| 0.0301 | 0.7695 | 500 | 0.0377 |
| 0.0422 | 0.8465 | 550 | 0.0353 |
| 0.0352 | 0.9234 | 600 | 0.0341 |
| 0.0327 | 1.0004 | 650 | 0.0346 |
| 0.0328 | 1.0773 | 700 | 0.0361 |
| 0.0278 | 1.1543 | 750 | 0.0347 |
| 0.0277 | 1.2312 | 800 | 0.0336 |
| 0.0278 | 1.3082 | 850 | 0.0347 |
| 0.0208 | 1.3851 | 900 | 0.0341 |
| 0.037 | 1.4621 | 950 | 0.0345 |
| 0.0335 | 1.5391 | 1000 | 0.0357 |
| 0.0305 | 1.6160 | 1050 | 0.0322 |
| 0.0337 | 1.6930 | 1100 | 0.0377 |
| 0.0221 | 1.7699 | 1150 | 0.0325 |
| 0.0192 | 1.8469 | 1200 | 0.0378 |
| 0.0282 | 1.9238 | 1250 | 0.0325 |
| 0.0216 | 2.0008 | 1300 | 0.0309 |
| 0.0172 | 2.0777 | 1350 | 0.0312 |
| 0.0238 | 2.1547 | 1400 | 0.0342 |
| 0.0118 | 2.2316 | 1450 | 0.0379 |
| 0.02 | 2.3086 | 1500 | 0.0349 |
| 0.0162 | 2.3855 | 1550 | 0.0389 |
| 0.0138 | 2.4625 | 1600 | 0.0367 |
| 0.0193 | 2.5394 | 1650 | 0.0348 |
| 0.0208 | 2.6164 | 1700 | 0.0356 |
| 0.0228 | 2.6933 | 1750 | 0.0326 |
| 0.0195 | 2.7703 | 1800 | 0.0323 |
| 0.0219 | 2.8472 | 1850 | 0.0317 |
| 0.0169 | 2.9242 | 1900 | 0.0329 |
| 0.0235 | 3.0012 | 1950 | 0.0340 |
| 0.0092 | 3.0781 | 2000 | 0.0377 |
| 0.0107 | 3.1551 | 2050 | 0.0413 |
| 0.0093 | 3.2320 | 2100 | 0.0398 |
| 0.0076 | 3.3090 | 2150 | 0.0406 |
| 0.0115 | 3.3859 | 2200 | 0.0380 |
| 0.0065 | 3.4629 | 2250 | 0.0371 |
| 0.0115 | 3.5398 | 2300 | 0.0394 |
| 0.006 | 3.6168 | 2350 | 0.0399 |
| 0.0119 | 3.6937 | 2400 | 0.0366 |
| 0.0068 | 3.7707 | 2450 | 0.0387 |
| 0.0079 | 3.8476 | 2500 | 0.0394 |
| 0.0092 | 3.9246 | 2550 | 0.0405 |
| 0.0088 | 4.0015 | 2600 | 0.0393 |
| 0.0017 | 4.0785 | 2650 | 0.0415 |
| 0.0076 | 4.1554 | 2700 | 0.0446 |
| 0.0017 | 4.2324 | 2750 | 0.0453 |
| 0.0027 | 4.3093 | 2800 | 0.0469 |
| 0.003 | 4.3863 | 2850 | 0.0485 |
| 0.0047 | 4.4633 | 2900 | 0.0493 |
| 0.0021 | 4.5402 | 2950 | 0.0484 |
| 0.0031 | 4.6172 | 3000 | 0.0485 |
| 0.0036 | 4.6941 | 3050 | 0.0488 |
| 0.0028 | 4.7711 | 3100 | 0.0488 |
| 0.0031 | 4.8480 | 3150 | 0.0487 |
| 0.0035 | 4.9250 | 3200 | 0.0487 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF
|
mradermacher
| 2025-01-30T02:56:19Z | 548 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7",
"base_model:quantized:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-29T16:56:29Z |
---
base_model: jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
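
For a single-file quant such as the i1-Q4_K_M listed below, a hedged Python sketch using `huggingface_hub` and `llama-cpp-python` (one of several possible runtimes) might be:

```python
# Rough sketch; llama.cpp or any other GGUF-compatible runtime works equally well.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF",
    filename="Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: Name the planets of the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```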
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ1_S.gguf) | i1-IQ1_S | 3.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ1_M.gguf) | i1-IQ1_M | 3.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ2_S.gguf) | i1-IQ2_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ2_M.gguf) | i1-IQ2_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q2_K.gguf) | i1-Q2_K | 5.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ3_S.gguf) | i1-IQ3_S | 6.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ3_M.gguf) | i1-IQ3_M | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q4_0.gguf) | i1-Q4_0 | 8.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q4_1.gguf) | i1-Q4_1 | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.i1-Q6_K.gguf) | i1-Q6_K | 12.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF
|
mradermacher
| 2025-01-30T02:56:15Z | 308 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7",
"base_model:quantized:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-27T17:45:54Z |
---
base_model: jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b7
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q2_K.gguf) | Q2_K | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q3_K_M.gguf) | Q3_K_M | 7.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.IQ4_XS.gguf) | IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q4_K_M.gguf) | Q4_K_M | 9.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q5_K_M.gguf) | Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b7-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b7.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF
|
mradermacher
| 2025-01-30T02:56:15Z | 259 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b8",
"base_model:quantized:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b8",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T21:37:02Z |
---
base_model: jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b8
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b8
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q2_K.gguf) | Q2_K | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q3_K_M.gguf) | Q3_K_M | 7.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.IQ4_XS.gguf) | IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q4_K_M.gguf) | Q4_K_M | 9.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q5_K_M.gguf) | Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b8-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b8.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shibajustfor/a690348b-9d15-46c5-92f0-37a6633c8cd5
|
shibajustfor
| 2025-01-30T02:50:54Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T02:41:36Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a690348b-9d15-46c5-92f0-37a6633c8cd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1454fa1fd1fe58d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1454fa1fd1fe58d_train_data.json
type:
field_input: possible_answers
field_instruction: question
field_output: memory_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/a690348b-9d15-46c5-92f0-37a6633c8cd5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1454fa1fd1fe58d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d3d1b80-2351-40f7-99cf-7e411e41051a
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d3d1b80-2351-40f7-99cf-7e411e41051a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a690348b-9d15-46c5-92f0-37a6633c8cd5
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.4000 |
| 3.4423 | 0.0018 | 13 | 0.5717 |
| 2.3137 | 0.0036 | 26 | 0.5613 |
| 1.6598 | 0.0054 | 39 | 0.5517 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
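
Because the base model ships custom modeling code (note `trust_remote_code: true` in the config above), a hedged loading sketch would pass `trust_remote_code=True` explicitly:

```python
# Sketch only: repository ids are taken from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Yarn-Mistral-7b-128k"
adapter_id = "shibajustfor/a690348b-9d15-46c5-92f0-37a6633c8cd5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
```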
|
trenden/c009a7b4-a9d4-465f-8146-9343cd836b63
|
trenden
| 2025-01-30T02:50:53Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-30T02:41:15Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c009a7b4-a9d4-465f-8146-9343cd836b63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1454fa1fd1fe58d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1454fa1fd1fe58d_train_data.json
type:
field_input: possible_answers
field_instruction: question
field_output: memory_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/c009a7b4-a9d4-465f-8146-9343cd836b63
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1454fa1fd1fe58d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d3d1b80-2351-40f7-99cf-7e411e41051a
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d3d1b80-2351-40f7-99cf-7e411e41051a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c009a7b4-a9d4-465f-8146-9343cd836b63
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.9597 |
| 4.551 | 0.0018 | 13 | 0.5514 |
| 2.2749 | 0.0036 | 26 | 0.5348 |
| 1.641 | 0.0054 | 39 | 0.5117 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
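The card does not include a usage snippet. The example below is a minimal inference sketch, assuming the adapter in this repository loads on top of the base model with the standard `peft` API; the prompt, dtype, and device settings are illustrative placeholders, not recommendations from the author.

```python
# Minimal sketch: attach this LoRA adapter to the Yarn-Mistral base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Yarn-Mistral-7b-128k"
adapter_id = "trenden/c009a7b4-a9d4-465f-8146-9343cd836b63"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the adapter weights
model.eval()

prompt = "Question: What is the capital of France? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```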
| sebastiansarasti/MNISTAutoEncoder | sebastiansarasti | 2025-01-30T02:49:22Z | 7 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "license:mit", "region:us"] | null | 2025-01-05T20:42:11Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
---
# ImageGenerationTAU: Autoencoder for MNIST Image Generation
## Model Details
- **Model Architecture:** Convolutional Autoencoder
- **Framework:** PyTorch
- **Input Shape:** (1, 28, 28) (Grayscale MNIST Images)
- **Latent Dimension:** User-defined (`hidden_dim`)
- **Dataset:** [MNIST Handwritten Digits](http://yann.lecun.com/exdb/mnist/)
## Model Description
The **ImageGenerationTAU** model is a **convolutional autoencoder** designed for **image generation and feature extraction** from MNIST. It consists of:
- An **encoder** that compresses the input image into a **low-dimensional representation**.
- A **decoder** that reconstructs the original image from the compressed representation.
This model can be used for **image denoising, feature learning, and generative tasks**.
## Training Details
- **Loss Function:** Smooth L1 Loss
- **Optimizer:** Adam
- **Batch Size:** 512
- **Number of Epochs:** TBD
- **Regularization:** Batch Normalization
### Model Architecture
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class ImageGenerationTAU(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim):
        super(ImageGenerationTAU, self).__init__()
        # Encoder: 28x28x1 -> 14x14x64 -> 7x7x32 -> hidden_dim
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.Conv2d(64, 32, kernel_size=3, stride=1, padding=1),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(),
            nn.BatchNorm2d(32),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, hidden_dim),
        )
        # Decoder: hidden_dim -> 7x7x32 -> 14x14x64 -> 28x28x1
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 64, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.ConvTranspose2d(64, 1, kernel_size=2, stride=2),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        x = self.encoder(x)   # compress to the latent representation
        x = self.decoder(x)   # reconstruct the 28x28 image
        return x
```
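The training details above can be made concrete with a short sketch. This is a minimal loop, not the author's script: it uses the stated Smooth L1 loss, Adam optimizer, and batch size of 512, while the learning rate, epoch count, and `hidden_dim` value are assumptions.

```python
# Sketch of a training loop matching the stated setup (Smooth L1 loss, Adam, batch 512).
# lr, epochs, and hidden_dim are assumptions; they are not given in the card.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_ds = datasets.MNIST(root="data", train=True, download=True,
                          transform=transforms.ToTensor())
train_loader = DataLoader(train_ds, batch_size=512, shuffle=True)

model = ImageGenerationTAU(hidden_dim=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.SmoothL1Loss()

for epoch in range(5):
    for images, _ in train_loader:          # labels are unused by the autoencoder
        optimizer.zero_grad()
        reconstruction = model(images)
        loss = criterion(reconstruction, images)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```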
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
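Since the class mixes in `PyTorchModelHubMixin`, the checkpoint can presumably be reloaded straight from the Hub. A minimal sketch, assuming the `hidden_dim` init argument was captured in the saved config (otherwise pass it explicitly to `from_pretrained`):

```python
# Sketch: reload the autoencoder from the Hub and reconstruct a dummy image.
# Assumes hidden_dim was stored in the repo's config; pass hidden_dim=... if not.
import torch

model = ImageGenerationTAU.from_pretrained("sebastiansarasti/MNISTAutoEncoder")
model.eval()

with torch.no_grad():
    dummy = torch.rand(1, 1, 28, 28)        # one fake grayscale digit
    reconstruction = model(dummy)
print(reconstruction.shape)                  # expected: torch.Size([1, 1, 28, 28])
```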
| Romain-XV/3d737383-c3fa-4c8f-ad38-b5af93247aca | Romain-XV | 2025-01-30T02:48:57Z | 7 | 0 | peft | ["peft", "safetensors", "dbrx", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-dbrx", "base_model:adapter:katuni4ka/tiny-random-dbrx", "region:us"] | null | 2025-01-30T02:44:38Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d737383-c3fa-4c8f-ad38-b5af93247aca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cf2f1c242df238b1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cf2f1c242df238b1_train_data.json
type:
field_input: fidelity_label
field_instruction: prompt
field_output: element_score
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/3d737383-c3fa-4c8f-ad38-b5af93247aca
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 1451
micro_batch_size: 4
mlflow_experiment_name: /tmp/cf2f1c242df238b1_train_data.json
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dce7eeea-a6c9-46de-944c-a4358d11654c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dce7eeea-a6c9-46de-944c-a4358d11654c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d737383-c3fa-4c8f-ad38-b5af93247aca
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 483
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 184.0 | 0.0021 | 1 | 11.5 |
| 184.0 | 0.1036 | 50 | 11.5 |
| 184.0 | 0.2072 | 100 | 11.5 |
| 184.0 | 0.3108 | 150 | 11.5 |
| 184.0 | 0.4143 | 200 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| dixedus/f39688c1-f7cc-408e-a991-becbf6c7b66e | dixedus | 2025-01-30T02:48:47Z | 7 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:peft-internal-testing/tiny-dummy-qwen2", "base_model:adapter:peft-internal-testing/tiny-dummy-qwen2", "region:us"] | null | 2025-01-30T02:38:04Z |
---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f39688c1-f7cc-408e-a991-becbf6c7b66e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c97ac490508cd842_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c97ac490508cd842_train_data.json
type:
field_input: cot
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dixedus/f39688c1-f7cc-408e-a991-becbf6c7b66e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/c97ac490508cd842_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d3fbaf7f-09ff-402e-b347-2eff8a768f9c
wandb_project: Gradients-On-Eight
wandb_run: your_name
wandb_runid: d3fbaf7f-09ff-402e-b347-2eff8a768f9c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f39688c1-f7cc-408e-a991-becbf6c7b66e
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 11.9326 |
| 11.9329 | 0.0051 | 9 | 11.9325 |
| 11.9317 | 0.0101 | 18 | 11.9323 |
| 11.9323 | 0.0152 | 27 | 11.9320 |
| 11.931 | 0.0203 | 36 | 11.9317 |
| 11.9319 | 0.0254 | 45 | 11.9314 |
| 11.9316 | 0.0304 | 54 | 11.9310 |
| 11.9312 | 0.0355 | 63 | 11.9307 |
| 11.9312 | 0.0406 | 72 | 11.9304 |
| 11.9302 | 0.0456 | 81 | 11.9303 |
| 11.9302 | 0.0507 | 90 | 11.9302 |
| 11.9303 | 0.0558 | 99 | 11.9302 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| hrasto/llamaxs1_open_subtitles_h | hrasto | 2025-01-30T02:48:16Z | 97 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-01-30T02:19:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
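In the absence of an official snippet, the following is a minimal, hedged example of loading this checkpoint with the standard `transformers` causal-LM API; the prompt and generation settings are placeholders.

```python
# Minimal sketch: load hrasto/llamaxs1_open_subtitles_h and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "hrasto/llamaxs1_open_subtitles_h"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```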
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| lesso03/2b2f62c0-7aa8-4f98-a6a1-35e5acbfde12 | lesso03 | 2025-01-30T02:46:38Z | 7 | 0 | peft | ["peft", "safetensors", "dbrx", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-dbrx", "base_model:adapter:katuni4ka/tiny-random-dbrx", "region:us"] | null | 2025-01-30T02:44:48Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b2f62c0-7aa8-4f98-a6a1-35e5acbfde12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cf2f1c242df238b1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cf2f1c242df238b1_train_data.json
type:
field_input: fidelity_label
field_instruction: prompt
field_output: element_score
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso03/2b2f62c0-7aa8-4f98-a6a1-35e5acbfde12
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/cf2f1c242df238b1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dce7eeea-a6c9-46de-944c-a4358d11654c
wandb_project: multi
wandb_run: your_name
wandb_runid: dce7eeea-a6c9-46de-944c-a4358d11654c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2b2f62c0-7aa8-4f98-a6a1-35e5acbfde12
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
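For reference, the effective batch sizes listed above follow directly from the per-device settings: total_train_batch_size = micro batch 2 × gradient accumulation 4 × 8 devices = 64, and total_eval_batch_size = eval batch 2 × 8 devices = 16.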
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.4143 | 200 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| lesso15/31da8fe2-de72-43ed-a294-1924045170c2 | lesso15 | 2025-01-30T02:45:24Z | 7 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-3.2-1B", "base_model:adapter:NousResearch/Llama-3.2-1B", "license:llama3.2", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-30T02:27:28Z |
---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31da8fe2-de72-43ed-a294-1924045170c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Llama-3.2-1B
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 2336298cd063de99_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2336298cd063de99_train_data.json
type:
field_input: ''
field_instruction: id
field_output: raw_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso15/31da8fe2-de72-43ed-a294-1924045170c2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2336298cd063de99_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bbb8a723-40a9-4355-9039-6f528db7c8e1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bbb8a723-40a9-4355-9039-6f528db7c8e1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 31da8fe2-de72-43ed-a294-1924045170c2
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3543 | 0.2205 | 200 | 2.2840 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| antimage88/c5e8dd67-32d8-44c5-82f6-928b3cf2e038 | antimage88 | 2025-01-30T02:44:23Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "region:us"] | null | 2025-01-30T01:49:57Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c5e8dd67-32d8-44c5-82f6-928b3cf2e038
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 8e83a81599a1c92e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8e83a81599a1c92e_train_data.json
type:
field_instruction: description
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: antimage88/c5e8dd67-32d8-44c5-82f6-928b3cf2e038
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/8e83a81599a1c92e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3fae5bf-6f85-4e00-b401-849bb92f687b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f3fae5bf-6f85-4e00-b401-849bb92f687b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c5e8dd67-32d8-44c5-82f6-928b3cf2e038
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (overriding the default betas=(0.9, 0.999) and epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5346 | 0.0004 | 1 | 3.9100 |
| 3.0634 | 0.0177 | 50 | 2.2406 |
| 2.9462 | 0.0354 | 100 | 2.0306 |
| 2.6391 | 0.0530 | 150 | 1.9351 |
| 2.6142 | 0.0707 | 200 | 1.9107 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|