modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 00:39:05) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 532 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 00:38:59) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---
chinxx66/uuu_fine_tune_gpt2 | chinxx66 | 2025-06-25T03:30:28Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:39:51Z |
---
license: apache-2.0
---
|
Daniel-xue/uuu_fine_tune_gpt2 | Daniel-xue | 2025-06-25T03:28:59Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:24:19Z |
---
license: apache-2.0
---
|
Doctor-Shotgun/L3.3-70B-Magnum-Diamond-LoRA | Doctor-Shotgun | 2025-06-25T03:26:58Z | 4 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:adapter:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "region:us"] | null | 2025-06-03T12:47:27Z |
---
library_name: peft
license: llama3.3
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- axolotl
- generated_from_trainer
---
# L3.3-70B-Magnum-Diamond-LoRA
Magnum "Diamond" in reference to the intense heat and pressure (generated through matrix multiplications) needed to turn the coal-esque material of dry, assistant-tuned models into creative writing gems!
This model is finetuned from [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as an rsLoRA adapter. It uses the same data mix as [Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha), but with pre-tokenization and modifications to the custom loss masking.
It is, for all intents and purposes, a version update of the former model.
This model should perform competently with or without prepending character names, and with or without prefill.
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
[Merged full model](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-Diamond)
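For reference, attaching the adapter to the base model with `peft` looks roughly like this (a minimal sketch of the standard `transformers`/`peft` pattern, not loading code published by the author):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the rsLoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Doctor-Shotgun/L3.3-70B-Magnum-Diamond-LoRA")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
```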
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
## Training procedure
[WandB](https://wandb.ai/doctorshotgun/70b-magnum-lora/runs/acnk2imq?nw=nwuserdoctorshotgun)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: meta-llama/Llama-3.3-70B-Instruct
base_model_ignore_patterns: "*/*"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: Doctor-Shotgun/magnum-v5-sft-prototype-70b-lora-rev1
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: anthracite-core/magnum-v5-sft-proto-llama3-rev1-32k
    ds_type: parquet
    type:
shuffle_merged_datasets: true
dataset_prepared_path: /workspace/magnum-70b-data
val_set_size: 0.0
output_dir: /workspace/70b-lora-out
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: 70b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 4e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: offload
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16_torch_compile.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Rookiezz/medgemma-4b-it-sft-lora-custom | Rookiezz | 2025-06-25T03:26:33Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:unsloth/medgemma-4b-it", "base_model:finetune:unsloth/medgemma-4b-it", "endpoints_compatible", "region:us"] | null | 2025-06-05T13:48:34Z |
---
base_model: unsloth/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-custom
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-custom
This model is a fine-tuned version of [unsloth/medgemma-4b-it](https://huggingface.co/unsloth/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Rookiezz/medgemma-4b-it-sft-lora-custom", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
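For context, supervised fine-tuning with TRL follows this general shape (a minimal sketch with a placeholder dataset; the actual training data and arguments are not documented here):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset for illustration only.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="unsloth/medgemma-4b-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="medgemma-4b-it-sft-lora-custom"),
)
trainer.train()
```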
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Cvwisework/qwen2.5-3b-passport_e1_train-autolabeled | Cvwisework | 2025-06-25T03:26:31Z | 0 | 0 | peft | ["peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct", "region:us"] | null | 2025-06-24T19:23:39Z |
---
library_name: peft
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
  - name: qwen2.5-3b-passport_e1_train-autolabeled
    results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-3b-passport_e1_train-autolabeled
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on an unknown dataset.
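A minimal loading sketch for this adapter (assuming the standard `peft` + `transformers` pattern for Qwen2.5-VL; this is not part of the auto-generated card):
```python
from transformers import Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

# Load the base vision-language model, then attach the fine-tuned adapter.
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Cvwisework/qwen2.5-3b-passport_e1_train-autolabeled")
```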
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 3.0.1
- Tokenizers 0.21.1
|
johngreendr1/72a53c5a-be56-4519-a53c-999041c64c96 | johngreendr1 | 2025-06-25T03:24:42Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Nous-Capybara-7B-V1.9", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9", "region:us"] | null | 2025-06-25T02:09:27Z |
---
base_model: NousResearch/Nous-Capybara-7B-V1.9
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
vishakr01/comp4_12 | vishakr01 | 2025-06-25T03:24:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-25T03:22:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Doctor-Shotgun/MS3.2-24B-Magnum-Diamond-LoRA | Doctor-Shotgun | 2025-06-25T03:23:42Z | 0 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "base_model:adapter:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "license:apache-2.0", "region:us"] | null | 2025-06-22T17:45:00Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
tags:
- axolotl
- generated_from_trainer
---
# MS3.2-24B-Magnum-Diamond-LoRA
Magnum "Diamond" in reference to the intense heat and pressure (generated through matrix multiplications) needed to turn the coal-esque material of dry, assistant-tuned models into creative writing gems!
This model is finetuned from a text-only conversion of [mistralai/Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) as an rsLoRA adapter. It uses the same data mix as [Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha), but with pre-tokenization and modifications to the custom loss masking.
The goal was to re-create the model at a smaller, more consumer-friendly size.
This model should perform competently with or without prepending character names, and with or without prefill.
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
This is a minor version update over [Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-LoRA](https://huggingface.co/Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-LoRA) utilizing the new official instruct model from June 2025.
[Merged full model](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond)
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
## Training procedure
[WandB](https://wandb.ai/gum1h0x/24b-magnum-lora/runs/3zudxeg3?nw=nwuseradrianjuliusbeck)
Training produced a weird loss spike of unclear significance on one sample, which was not seen when using the same dataset on Mistral Small 3.1 Instruct; the resulting model nonetheless appears to be sane.
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
#base_model_ignore_patterns: "consolidated.safetensors"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: NewEden/magnum-v5-sft-prototype-ms3.2-lora
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: NewEden/magnum-v5-sft-proto-mistral-v7-tekken-rev1-32k
    ds_type: parquet
    type:
shuffle_merged_datasets: true
dataset_prepared_path: ./magnum-24b-data
val_set_size: 0.0
output_dir: ./magnum-24b-lora-out
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: 24b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name: 24b-magnum-lora-mistral-3.2
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed:
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
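For readers using `peft` directly rather than axolotl, the adapter settings above map onto a `LoraConfig` roughly like this (a sketch under the assumption that axolotl forwards these fields to `peft` unchanged):
```python
from peft import LoraConfig

config = LoraConfig(
    r=128,
    lora_alpha=16,
    lora_dropout=0.05,
    use_rslora=True,              # rsLoRA scales updates by lora_alpha / sqrt(r) instead of lora_alpha / r
    target_modules="all-linear",  # analogous to lora_target_linear: true
    modules_to_save=["embed_tokens", "lm_head"],
)
```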
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Jack89215/uuu_fine_tune_gpt2 | Jack89215 | 2025-06-25T03:23:34Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:40:26Z |
---
license: apache-2.0
---
|
eatim/uuu_fine_tune_taipower | eatim | 2025-06-25T03:23:17Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:20:46Z |
---
license: apache-2.0
---
|
Bogoo/SmolLM2_1.7B_LoRA_ro_Wiki | Bogoo | 2025-06-25T03:22:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-25T03:22:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CHIANG0903/uuu_fine_tune_gpt2 | CHIANG0903 | 2025-06-25T03:21:59Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:49:35Z |
---
license: apache-2.0
---
|
New-videos-Mahiye-selin-viral-video-Clips/FULL.VIDEO.Mahiye.selin.Viral.Video.Tutorial.Official | New-videos-Mahiye-selin-viral-video-Clips | 2025-06-25T03:20:27Z | 0 | 0 | null | ["region:us"] | null | 2025-06-25T03:20:14Z |
|
Hastagaras/XGS-9B-INS-TEST-RESIZED-FP16 | Hastagaras | 2025-06-25T03:18:22Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-25T03:12:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-Shubham-Gupta-viral-video-Clips/FULL.VIDEO.Shubham.Gupta.Viral.Video.Tutorial.Official | New-videos-Shubham-Gupta-viral-video-Clips | 2025-06-25T03:15:09Z | 0 | 0 | null | ["region:us"] | null | 2025-06-25T03:14:55Z |
|
chinxx66/uuu_fine_tune_taipower | chinxx66 | 2025-06-25T03:13:46Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:39:13Z |
---
license: apache-2.0
---
|
linfone2/uuu_fine_tune_taipower | linfone2 | 2025-06-25T03:13:28Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:43:19Z |
---
license: apache-2.0
---
|
vincrnt/uuu_fine_tune_taipower | vincrnt | 2025-06-25T03:13:14Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:34:26Z |
---
license: apache-2.0
---
|
sam34738/muril-resnet-binary | sam34738 | 2025-06-25T03:12:34Z | 0 | 0 | transformers | ["transformers", "safetensors", "binary_multimodal", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-25T03:11:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrbmaryam/SFT_F4 | mrbmaryam | 2025-06-25T03:11:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-25T03:11:26Z |
---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mrbmaryam
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
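A minimal sketch of loading this checkpoint with Unsloth for inference (illustrative; `max_seq_length` is an assumption, and this presumes the repo contains full merged weights):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mrbmaryam/SFT_F4",  # fine-tuned from unsloth/mistral-7b-v0.3-bnb-4bit
    max_seq_length=2048,            # assumed context length for this sketch
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```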
|
pratyushmathur/q-FrozenLake-v1-4x4-noSlippery | pratyushmathur | 2025-06-25T03:11:06Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2025-06-25T03:09:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
  - name: q-FrozenLake-v1-4x4-noSlippery
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: FrozenLake-v1-4x4-no_slippery
          type: FrozenLake-v1-4x4-no_slippery
        metrics:
          - type: mean_reward
            value: 1.00 +/- 0.00
            name: mean_reward
            verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="pratyushmathur/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
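A short usage sketch for acting greedily with the loaded Q-table (this assumes the pickled dict stores the table under a `"qtable"` key, as in the Hugging Face Deep RL course):
```python
import numpy as np

# Reset the environment and take the greedy action from the Q-table.
state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # "qtable" key is an assumption
next_state, reward, terminated, truncated, info = env.step(action)
```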
|
Daniel-xue/uuu_fine_tune_taipower | Daniel-xue | 2025-06-25T03:09:09Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:24:04Z |
---
license: apache-2.0
---
|
John6666/illustrious-semi-realistic-anime-v30-sdxl | John6666 | 2025-06-25T03:08:54Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "realistic", "semi-realistic", "girls", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-06-25T03:02:46Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- semi-realistic
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1711896/illustrious-semi-realistic-anime?modelVersionId=1937224).
This model was created by [shishu21](https://civitai.com/user/shishu21).
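A minimal sketch for loading this checkpoint with `diffusers` (assuming the standard SDXL pipeline layout indicated by the repo tags; the prompt is only an example):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL pipeline in half precision and move it to the GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/illustrious-semi-realistic-anime-v30-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, semi-realistic, upper body, looking at viewer").images[0]
image.save("sample.png")
```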
|
NamVo/mini_r1_unsloth_lora128 | NamVo | 2025-06-25T03:08:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "grpo", "arxiv:2402.03300", "endpoints_compatible", "region:us"] | null | 2025-06-25T03:07:21Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
library_name: transformers
model_name: mini_r1_unsloth_lora128
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for mini_r1_unsloth_lora128
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NamVo/mini_r1_unsloth_lora128", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nvoz1812/huggingface/runs/vbjrbue6)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
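For context, GRPO training with TRL has this general shape (a minimal sketch with a toy reward function and placeholder dataset; the actual reward and training data are not documented here):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset for illustration only.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward for illustration: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="mini_r1_unsloth_lora128"),
    train_dataset=dataset,
)
trainer.train()
```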
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
CHIANG0903/uuu_fine_tune_taipower | CHIANG0903 | 2025-06-25T03:07:23Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:49:18Z |
---
license: apache-2.0
---
|
New-videos-ola-electric-viral-video-Clips/FULL.VIDEO.ola-electric.Viral.Video.Tutorial.Official | New-videos-ola-electric-viral-video-Clips | 2025-06-25T03:06:16Z | 0 | 0 | null | ["region:us"] | null | 2025-06-25T03:06:02Z |
|
mlx-community/Cydonia-24B-v3.1-8bit | mlx-community | 2025-06-25T03:03:22Z | 0 | 0 | mlx | ["mlx", "safetensors", "mistral", "text-generation", "base_model:TheDrummer/Cydonia-24B-v3.1", "base_model:quantized:TheDrummer/Cydonia-24B-v3.1", "8-bit", "region:us"] | text-generation | 2025-06-25T02:55:58Z |
---
base_model: TheDrummer/Cydonia-24B-v3.1
tags:
- mlx
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Cydonia-24B-v3.1-8bit
This model [mlx-community/Cydonia-24B-v3.1-8bit](https://huggingface.co/mlx-community/Cydonia-24B-v3.1-8bit) was
converted to MLX format from [TheDrummer/Cydonia-24B-v3.1](https://huggingface.co/TheDrummer/Cydonia-24B-v3.1)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Cydonia-24B-v3.1-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
iwagoro/layoutlm-docbank | iwagoro | 2025-06-25T03:03:03Z | 0 | 0 | null | ["tensorboard", "safetensors", "layoutlm", "generated_from_trainer", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "region:us"] | null | 2025-06-23T16:37:55Z |
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
  - name: layoutlm-docbank
    results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-docbank
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2981
- Table: {'precision': 0.7228813559322034, 'recall': 0.8229618909792571, 'f1': 0.7696819309722536, 'number': 2073}
- Caption: {'precision': 0.8535364768683275, 'recall': 0.8798578470709618, 'f1': 0.8664973186565058, 'number': 8723}
- Paragraph: {'precision': 0.7315439151833142, 'recall': 0.7769411439624205, 'f1': 0.7535594242387018, 'number': 43428}
- Date: {'precision': 0.8031088082901554, 'recall': 0.8333333333333334, 'f1': 0.8179419525065963, 'number': 186}
- Abstract: {'precision': 0.9137055837563451, 'recall': 0.9399477806788512, 'f1': 0.9266409266409267, 'number': 2298}
- Section: {'precision': 0.9108754155453538, 'recall': 0.9432786885245902, 'f1': 0.9267939115728436, 'number': 6100}
- Reference: {'precision': 0.5945041816009558, 'recall': 0.7409172126265634, 'f1': 0.6596844756728092, 'number': 3358}
- Figure: {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986}
- List: {'precision': 0.6354533152909337, 'recall': 0.693853427895981, 'f1': 0.6633705325610961, 'number': 3384}
- Title: {'precision': 0.8534278959810875, 'recall': 0.8356481481481481, 'f1': 0.8444444444444444, 'number': 864}
- Footer: {'precision': 0.6076190476190476, 'recall': 0.7057522123893806, 'f1': 0.6530194472876152, 'number': 452}
- Equation: {'precision': 0.6943667406192727, 'recall': 0.7324481074481074, 'f1': 0.7128992324832879, 'number': 19656}
- Author: {'precision': 0.5667556742323098, 'recall': 0.616557734204793, 'f1': 0.5906086956521739, 'number': 1377}
- Overall Precision: 0.7417
- Overall Recall: 0.7891
- Overall F1: 0.7647
- Overall Accuracy: 0.9639
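A hedged inference sketch for this token-classification checkpoint (assumes word boxes normalized to the 0-1000 range, as LayoutLM expects; in practice these come from OCR output):
```python
import torch
from transformers import LayoutLMTokenizerFast, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("iwagoro/layoutlm-docbank")

words = ["ABSTRACT", "We", "present", "a", "method"]  # example words, not real data
word_boxes = [[60, 50, 160, 70], [60, 80, 90, 95], [95, 80, 150, 95],
              [155, 80, 165, 95], [170, 80, 220, 95]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a dummy box.
token_boxes = [word_boxes[idx] if idx is not None else [0, 0, 0, 0]
               for idx in encoding.word_ids(0)]
bbox = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding, bbox=bbox).logits
predictions = logits.argmax(-1)  # per-token label ids
```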
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Table | Caption | Paragraph | Date | Abstract | Section | Reference | Figure | List | Title | Footer | Equation | Author | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0.2526 | 1.0 | 1876 | 0.1649 | {'precision': 0.4146422628951747, 'recall': 0.6010612638687892, 'f1': 0.4907443875541552, 'number': 2073} | {'precision': 0.6553778613985576, 'recall': 0.7187894073139974, 'f1': 0.6856205576817933, 'number': 8723} | {'precision': 0.5402088876533895, 'recall': 0.6264621902919775, 'f1': 0.5801471372214522, 'number': 43428} | {'precision': 0.7005649717514124, 'recall': 0.6666666666666666, 'f1': 0.6831955922865013, 'number': 186} | {'precision': 0.7803265940902022, 'recall': 0.8733681462140992, 'f1': 0.8242299794661191, 'number': 2298} | {'precision': 0.8863070539419087, 'recall': 0.8754098360655738, 'f1': 0.8808247422680412, 'number': 6100} | {'precision': 0.5497456189937818, 'recall': 0.5792138177486599, 'f1': 0.5640951276102087, 'number': 3358} | {'precision': 0.9828801611278952, 'recall': 0.9898580121703854, 'f1': 0.9863567458312278, 'number': 986} | {'precision': 0.40529189416211675, 'recall': 0.5703309692671394, 'f1': 0.4738521973974957, 'number': 3384} | {'precision': 0.7797202797202797, 'recall': 0.7743055555555556, 'f1': 0.7770034843205574, 'number': 864} | {'precision': 0.17008797653958943, 'recall': 0.12831858407079647, 'f1': 0.1462799495586381, 'number': 452} | {'precision': 0.538917549099466, 'recall': 0.6058709808709809, 'f1': 0.5704363653781674, 'number': 19656} | {'precision': 0.2181916621548457, 'recall': 0.2926652142338417, 'f1': 0.25, 'number': 1377} | 0.5660 | 0.6469 | 0.6038 | 0.9444 |
| 0.1508 | 2.0 | 3752 | 0.1490 | {'precision': 0.5242351323478859, 'recall': 0.7356488181379643, 'f1': 0.6122039341629867, 'number': 2073} | {'precision': 0.7619706320493722, 'recall': 0.8209331651954602, 'f1': 0.7903537332376801, 'number': 8723} | {'precision': 0.5979018162780395, 'recall': 0.6837293911761997, 'f1': 0.6379417767751638, 'number': 43428} | {'precision': 0.5978260869565217, 'recall': 0.8870967741935484, 'f1': 0.7142857142857144, 'number': 186} | {'precision': 0.8250298923874053, 'recall': 0.9007832898172323, 'f1': 0.8612440191387559, 'number': 2298} | {'precision': 0.8531830642704843, 'recall': 0.9183606557377049, 'f1': 0.8845728722564344, 'number': 6100} | {'precision': 0.6411569749924676, 'recall': 0.6337105419892793, 'f1': 0.6374120113823574, 'number': 3358} | {'precision': 0.987891019172553, 'recall': 0.9929006085192698, 'f1': 0.9903894790085989, 'number': 986} | {'precision': 0.458251953125, 'recall': 0.5546690307328606, 'f1': 0.5018716577540108, 'number': 3384} | {'precision': 0.7446808510638298, 'recall': 0.7696759259259259, 'f1': 0.7569721115537849, 'number': 864} | {'precision': 0.5972850678733032, 'recall': 0.584070796460177, 'f1': 0.5906040268456375, 'number': 452} | {'precision': 0.5535211267605634, 'recall': 0.6597985347985348, 'f1': 0.6020052917420973, 'number': 19656} | {'precision': 0.2989556135770235, 'recall': 0.33260711692084244, 'f1': 0.31488484015125473, 'number': 1377} | 0.6183 | 0.7058 | 0.6592 | 0.9525 |
| 0.1176 | 3.0 | 5628 | 0.1530 | {'precision': 0.5526420341676599, 'recall': 0.6710082006753497, 'f1': 0.6061002178649237, 'number': 2073} | {'precision': 0.7773131767985418, 'recall': 0.8311360770377164, 'f1': 0.8033240997229917, 'number': 8723} | {'precision': 0.6078152985889651, 'recall': 0.6407617205489546, 'f1': 0.6238538280461833, 'number': 43428} | {'precision': 0.5854545454545454, 'recall': 0.8655913978494624, 'f1': 0.6984815618221257, 'number': 186} | {'precision': 0.8378161380971497, 'recall': 0.9081810269799826, 'f1': 0.8715807057840885, 'number': 2298} | {'precision': 0.8598871779234639, 'recall': 0.9245901639344263, 'f1': 0.8910656449956552, 'number': 6100} | {'precision': 0.5440832249674903, 'recall': 0.6229898749255509, 'f1': 0.5808690823268083, 'number': 3358} | {'precision': 0.9929292929292929, 'recall': 0.9969574036511156, 'f1': 0.9949392712550607, 'number': 986} | {'precision': 0.39487179487179486, 'recall': 0.45508274231678486, 'f1': 0.42284459088412957, 'number': 3384} | {'precision': 0.6833667334669339, 'recall': 0.7893518518518519, 'f1': 0.7325456498388828, 'number': 864} | {'precision': 0.43794579172610554, 'recall': 0.6792035398230089, 'f1': 0.5325238508239375, 'number': 452} | {'precision': 0.5741028804376977, 'recall': 0.5445156695156695, 'f1': 0.5589179874148149, 'number': 19656} | {'precision': 0.3929008567931457, 'recall': 0.4662309368191721, 'f1': 0.4264363998671538, 'number': 1377} | 0.6277 | 0.6600 | 0.6435 | 0.9527 |
| 0.0871 | 4.0 | 7504 | 0.1564 | {'precision': 0.6151919866444073, 'recall': 0.7110467920887602, 'f1': 0.6596554038934884, 'number': 2073} | {'precision': 0.7617387738363748, 'recall': 0.8517711796400321, 'f1': 0.8042431130594794, 'number': 8723} | {'precision': 0.6353752874764792, 'recall': 0.6997789444597955, 'f1': 0.6660238006530934, 'number': 43428} | {'precision': 0.6217228464419475, 'recall': 0.8924731182795699, 'f1': 0.7328918322295807, 'number': 186} | {'precision': 0.8827993254637436, 'recall': 0.9112271540469974, 'f1': 0.8967880085653105, 'number': 2298} | {'precision': 0.8789195901893821, 'recall': 0.9281967213114755, 'f1': 0.9028863020251954, 'number': 6100} | {'precision': 0.5240302512808002, 'recall': 0.6396664681357951, 'f1': 0.5761029904787448, 'number': 3358} | {'precision': 0.9828629032258065, 'recall': 0.9888438133874239, 'f1': 0.9858442871587463, 'number': 986} | {'precision': 0.48228571428571426, 'recall': 0.6235224586288416, 'f1': 0.5438845212011857, 'number': 3384} | {'precision': 0.8669301712779973, 'recall': 0.7615740740740741, 'f1': 0.8108441158348736, 'number': 864} | {'precision': 0.542016806722689, 'recall': 0.5707964601769911, 'f1': 0.5560344827586207, 'number': 452} | {'precision': 0.6165904637491836, 'recall': 0.6723646723646723, 'f1': 0.6432708688245315, 'number': 19656} | {'precision': 0.46214852198990625, 'recall': 0.46550472040668117, 'f1': 0.4638205499276411, 'number': 1377} | 0.6553 | 0.7237 | 0.6878 | 0.9542 |
| 0.0676 | 5.0 | 9380 | 0.1583 | {'precision': 0.6492985971943888, 'recall': 0.7814761215629522, 'f1': 0.7092819614711033, 'number': 2073} | {'precision': 0.8149818501814982, 'recall': 0.8493637510030952, 'f1': 0.8318176714943303, 'number': 8723} | {'precision': 0.6827026670477782, 'recall': 0.7149765128488533, 'f1': 0.6984669718476194, 'number': 43428} | {'precision': 0.9294871794871795, 'recall': 0.7795698924731183, 'f1': 0.847953216374269, 'number': 186} | {'precision': 0.8599190283400809, 'recall': 0.9242819843342036, 'f1': 0.890939597315436, 'number': 2298} | {'precision': 0.8848062015503876, 'recall': 0.9355737704918032, 'f1': 0.9094820717131474, 'number': 6100} | {'precision': 0.5955380577427821, 'recall': 0.6756998213222156, 'f1': 0.6330915178571428, 'number': 3358} | {'precision': 0.992936427850656, 'recall': 0.9979716024340771, 'f1': 0.9954476479514417, 'number': 986} | {'precision': 0.5794343113930743, 'recall': 0.6477541371158393, 'f1': 0.6116924794195621, 'number': 3384} | {'precision': 0.8134243458475541, 'recall': 0.8275462962962963, 'f1': 0.8204245553643145, 'number': 864} | {'precision': 0.6065573770491803, 'recall': 0.6548672566371682, 'f1': 0.6297872340425531, 'number': 452} | {'precision': 0.6497243107769424, 'recall': 0.6594424094424094, 'f1': 0.654547290814523, 'number': 19656} | {'precision': 0.46639784946236557, 'recall': 0.5039941902687001, 'f1': 0.4844677137870855, 'number': 1377} | 0.6989 | 0.7339 | 0.7160 | 0.9598 |
| 0.0512 | 6.0 | 11256 | 0.1844 | {'precision': 0.645, 'recall': 0.7467438494934877, 'f1': 0.6921529175050302, 'number': 2073} | {'precision': 0.8094872076424728, 'recall': 0.8451220910237304, 'f1': 0.8269209197980932, 'number': 8723} | {'precision': 0.6710134048257372, 'recall': 0.7204107948788799, 'f1': 0.6948352636780563, 'number': 43428} | {'precision': 0.6753246753246753, 'recall': 0.8387096774193549, 'f1': 0.7482014388489209, 'number': 186} | {'precision': 0.8834745762711864, 'recall': 0.9073107049608355, 'f1': 0.8952340060111635, 'number': 2298} | {'precision': 0.9024081115335868, 'recall': 0.9337704918032786, 'f1': 0.9178214631002256, 'number': 6100} | {'precision': 0.4868008948545861, 'recall': 0.6480047647409172, 'f1': 0.5559529892692898, 'number': 3358} | {'precision': 0.9929292929292929, 'recall': 0.9969574036511156, 'f1': 0.9949392712550607, 'number': 986} | {'precision': 0.5424300867888139, 'recall': 0.6648936170212766, 'f1': 0.5974508762612852, 'number': 3384} | {'precision': 0.7554179566563467, 'recall': 0.8472222222222222, 'f1': 0.7986906710310966, 'number': 864} | {'precision': 0.6563981042654028, 'recall': 0.6128318584070797, 'f1': 0.6338672768878719, 'number': 452} | {'precision': 0.650782911270056, 'recall': 0.685083435083435, 'f1': 0.6674928125309805, 'number': 19656} | {'precision': 0.4430835734870317, 'recall': 0.4466230936819172, 'f1': 0.4448462929475588, 'number': 1377} | 0.6856 | 0.7390 | 0.7113 | 0.9578 |
| 0.0389 | 7.0 | 13132 | 0.2002 | {'precision': 0.6875749101078705, 'recall': 0.8301977809937289, 'f1': 0.7521853146853146, 'number': 2073} | {'precision': 0.798666243251826, 'recall': 0.8649547174137338, 'f1': 0.8304898183819481, 'number': 8723} | {'precision': 0.6971504451749134, 'recall': 0.7374274661508704, 'f1': 0.7167235494880546, 'number': 43428} | {'precision': 0.774869109947644, 'recall': 0.7956989247311828, 'f1': 0.7851458885941645, 'number': 186} | {'precision': 0.8827004219409282, 'recall': 0.9103568320278503, 'f1': 0.8963153384747214, 'number': 2298} | {'precision': 0.9097432626375379, 'recall': 0.9352459016393443, 'f1': 0.9223183251151887, 'number': 6100} | {'precision': 0.6794092093831451, 'recall': 0.6986301369863014, 'f1': 0.6888856261929232, 'number': 3358} | {'precision': 0.9959473150962512, 'recall': 0.9969574036511156, 'f1': 0.9964521033958439, 'number': 986} | {'precision': 0.5793751587503175, 'recall': 0.6740543735224587, 'f1': 0.6231389154487093, 'number': 3384} | {'precision': 0.834128878281623, 'recall': 0.8090277777777778, 'f1': 0.8213866039952996, 'number': 864} | {'precision': 0.6046511627906976, 'recall': 0.6327433628318584, 'f1': 0.6183783783783783, 'number': 452} | {'precision': 0.6526806526806527, 'recall': 0.698005698005698, 'f1': 0.6745826880055068, 'number': 19656} | {'precision': 0.46461949265687585, 'recall': 0.5054466230936819, 'f1': 0.4841739130434783, 'number': 1377} | 0.7101 | 0.7563 | 0.7325 | 0.9579 |
| 0.0281 | 8.0 | 15008 | 0.2068 | {'precision': 0.7080638206123329, 'recall': 0.7920887602508442, 'f1': 0.7477231329690345, 'number': 2073} | {'precision': 0.8085677474769165, 'recall': 0.8633497649891092, 'f1': 0.8350612629594723, 'number': 8723} | {'precision': 0.7156752540662064, 'recall': 0.7183614258082344, 'f1': 0.7170158241303624, 'number': 43428} | {'precision': 0.578397212543554, 'recall': 0.8924731182795699, 'f1': 0.7019027484143764, 'number': 186} | {'precision': 0.8733221476510067, 'recall': 0.9060052219321149, 'f1': 0.8893635198633062, 'number': 2298} | {'precision': 0.9074427480916031, 'recall': 0.9354098360655738, 'f1': 0.9212140781401356, 'number': 6100} | {'precision': 0.6934523809523809, 'recall': 0.6938653960690887, 'f1': 0.6936588270318546, 'number': 3358} | {'precision': 0.9979736575481256, 'recall': 0.9989858012170385, 'f1': 0.9984794728839331, 'number': 986} | {'precision': 0.5753681392235609, 'recall': 0.6350472813238771, 'f1': 0.6037364798426745, 'number': 3384} | {'precision': 0.8312958435207825, 'recall': 0.7870370370370371, 'f1': 0.8085612366230678, 'number': 864} | {'precision': 0.5778688524590164, 'recall': 0.6238938053097345, 'f1': 0.6, 'number': 452} | {'precision': 0.693010752688172, 'recall': 0.6557794057794057, 'f1': 0.6738812212463404, 'number': 19656} | {'precision': 0.5053262316910786, 'recall': 0.55119825708061, 'f1': 0.5272664119485932, 'number': 1377} | 0.7302 | 0.7364 | 0.7333 | 0.9604 |
| 0.0222 | 9.0 | 16884 | 0.2193 | {'precision': 0.6235811058220432, 'recall': 0.8215147129763628, 'f1': 0.708992506244796, 'number': 2073} | {'precision': 0.8264917003140422, 'recall': 0.8447781726470251, 'f1': 0.8355348942683827, 'number': 8723} | {'precision': 0.7017585809621112, 'recall': 0.7433683337938657, 'f1': 0.7219644194965952, 'number': 43428} | {'precision': 0.90625, 'recall': 0.7795698924731183, 'f1': 0.838150289017341, 'number': 186} | {'precision': 0.8704156479217604, 'recall': 0.9295039164490861, 'f1': 0.898989898989899, 'number': 2298} | {'precision': 0.9143317230273752, 'recall': 0.9308196721311476, 'f1': 0.922502030869212, 'number': 6100} | {'precision': 0.5801470588235295, 'recall': 0.704883859440143, 'f1': 0.6364614143586985, 'number': 3358} | {'precision': 0.9949443882709808, 'recall': 0.9979716024340771, 'f1': 0.9964556962025316, 'number': 986} | {'precision': 0.6149458071876782, 'recall': 0.6371158392434988, 'f1': 0.6258345428156749, 'number': 3384} | {'precision': 0.8431137724550898, 'recall': 0.8148148148148148, 'f1': 0.8287227781047676, 'number': 864} | {'precision': 0.6629711751662971, 'recall': 0.661504424778761, 'f1': 0.6622369878183831, 'number': 452} | {'precision': 0.6730908214887978, 'recall': 0.7107244607244607, 'f1': 0.6913959070550098, 'number': 19656} | {'precision': 0.5108055009823183, 'recall': 0.5664488017429193, 'f1': 0.537190082644628, 'number': 1377} | 0.7156 | 0.7598 | 0.7371 | 0.9596 |
| 0.0162 | 10.0 | 18760 | 0.2114 | {'precision': 0.6486062033765214, 'recall': 0.7969126869271587, 'f1': 0.7151515151515152, 'number': 2073} | {'precision': 0.8267941532036488, 'recall': 0.8624326493178952, 'f1': 0.8442374593199417, 'number': 8723} | {'precision': 0.7077005538681437, 'recall': 0.7296674956249425, 'f1': 0.7185161670672531, 'number': 43428} | {'precision': 0.9085365853658537, 'recall': 0.8010752688172043, 'f1': 0.8514285714285714, 'number': 186} | {'precision': 0.844675740592474, 'recall': 0.918189730200174, 'f1': 0.8798999165971642, 'number': 2298} | {'precision': 0.9145987753786659, 'recall': 0.9304918032786885, 'f1': 0.9224768405655778, 'number': 6100} | {'precision': 0.5639344262295082, 'recall': 0.6658725431804645, 'f1': 0.6106786835996176, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6411883472743005, 'recall': 0.6569148936170213, 'f1': 0.6489563567362429, 'number': 3384} | {'precision': 0.8991935483870968, 'recall': 0.7743055555555556, 'f1': 0.832089552238806, 'number': 864} | {'precision': 0.5370018975332068, 'recall': 0.6261061946902655, 'f1': 0.5781409601634322, 'number': 452} | {'precision': 0.6896913159687648, 'recall': 0.6695156695156695, 'f1': 0.6794537522265535, 'number': 19656} | {'precision': 0.5298196948682385, 'recall': 0.5548293391430646, 'f1': 0.5420361830436324, 'number': 1377} | 0.7237 | 0.7441 | 0.7338 | 0.9611 |
| 0.0138 | 11.0 | 20636 | 0.2391 | {'precision': 0.664185277088503, 'recall': 0.7747226242161119, 'f1': 0.7152081941661101, 'number': 2073} | {'precision': 0.8144112087178917, 'recall': 0.8396193969964462, 'f1': 0.8268232106570332, 'number': 8723} | {'precision': 0.7044044130322358, 'recall': 0.7527401676337847, 'f1': 0.7277706042121198, 'number': 43428} | {'precision': 0.8324022346368715, 'recall': 0.8010752688172043, 'f1': 0.8164383561643834, 'number': 186} | {'precision': 0.8978132884777124, 'recall': 0.9290687554395126, 'f1': 0.9131736526946108, 'number': 2298} | {'precision': 0.9141269841269841, 'recall': 0.9440983606557377, 'f1': 0.9288709677419354, 'number': 6100} | {'precision': 0.5543908688562776, 'recall': 0.7087552114353782, 'f1': 0.6221408966148216, 'number': 3358} | {'precision': 0.9949494949494949, 'recall': 0.9989858012170385, 'f1': 0.9969635627530363, 'number': 986} | {'precision': 0.5981259760541384, 'recall': 0.6790780141843972, 'f1': 0.6360365347356767, 'number': 3384} | {'precision': 0.8146453089244852, 'recall': 0.8240740740740741, 'f1': 0.8193325661680093, 'number': 864} | {'precision': 0.6401673640167364, 'recall': 0.6769911504424779, 'f1': 0.6580645161290323, 'number': 452} | {'precision': 0.6891924859721883, 'recall': 0.7186100936100936, 'f1': 0.7035939329032901, 'number': 19656} | {'precision': 0.530638852672751, 'recall': 0.5911401597676107, 'f1': 0.5592579869460667, 'number': 1377} | 0.7187 | 0.7674 | 0.7423 | 0.9601 |
| 0.0099 | 12.0 | 22512 | 0.2190 | {'precision': 0.5986635220125787, 'recall': 0.7346840328027014, 'f1': 0.6597357591509638, 'number': 2073} | {'precision': 0.8261346196009647, 'recall': 0.863922962283618, 'f1': 0.844606332305968, 'number': 8723} | {'precision': 0.7126507076708021, 'recall': 0.7513125172699641, 'f1': 0.7314711025422589, 'number': 43428} | {'precision': 0.8630952380952381, 'recall': 0.7795698924731183, 'f1': 0.8192090395480226, 'number': 186} | {'precision': 0.8786008230452675, 'recall': 0.9290687554395126, 'f1': 0.9031302876480541, 'number': 2298} | {'precision': 0.8979878334113243, 'recall': 0.9437704918032787, 'f1': 0.9203101270881625, 'number': 6100} | {'precision': 0.5727510087823404, 'recall': 0.7185824895771292, 'f1': 0.6374323074891033, 'number': 3358} | {'precision': 0.9969604863221885, 'recall': 0.9979716024340771, 'f1': 0.9974657881398886, 'number': 986} | {'precision': 0.6077103412346966, 'recall': 0.6894208037825059, 'f1': 0.6459919700955282, 'number': 3384} | {'precision': 0.8236632536973834, 'recall': 0.8379629629629629, 'f1': 0.8307515777395296, 'number': 864} | {'precision': 0.6161417322834646, 'recall': 0.6924778761061947, 'f1': 0.6520833333333333, 'number': 452} | {'precision': 0.705915521837195, 'recall': 0.7006003256003256, 'f1': 0.7032478807067715, 'number': 19656} | {'precision': 0.4981527093596059, 'recall': 0.5875090777051561, 'f1': 0.5391536154615129, 'number': 1377} | 0.7251 | 0.7652 | 0.7446 | 0.9624 |
| 0.0084 | 13.0 | 24388 | 0.2592 | {'precision': 0.6832247557003257, 'recall': 0.8094548962855764, 'f1': 0.741002428792228, 'number': 2073} | {'precision': 0.8483670295489891, 'recall': 0.8755015476326952, 'f1': 0.8617207334273626, 'number': 8723} | {'precision': 0.7274626600284495, 'recall': 0.7536612323846367, 'f1': 0.7403302420266908, 'number': 43428} | {'precision': 0.8361581920903954, 'recall': 0.7956989247311828, 'f1': 0.815426997245179, 'number': 186} | {'precision': 0.9015565839293227, 'recall': 0.9325500435161009, 'f1': 0.9167914438502675, 'number': 2298} | {'precision': 0.9054671498345676, 'recall': 0.9421311475409836, 'f1': 0.9234353659516349, 'number': 6100} | {'precision': 0.6139511458071015, 'recall': 0.726027397260274, 'f1': 0.6653022240414791, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6385964912280702, 'recall': 0.6453900709219859, 'f1': 0.6419753086419754, 'number': 3384} | {'precision': 0.7660223804679552, 'recall': 0.8715277777777778, 'f1': 0.8153762858689767, 'number': 864} | {'precision': 0.6666666666666666, 'recall': 0.6548672566371682, 'f1': 0.6607142857142857, 'number': 452} | {'precision': 0.6978891162233645, 'recall': 0.7114875864875865, 'f1': 0.7046227484569845, 'number': 19656} | {'precision': 0.5463768115942029, 'recall': 0.5475671750181554, 'f1': 0.5469713456655786, 'number': 1377} | 0.7401 | 0.7695 | 0.7545 | 0.9625 |
| 0.0073 | 14.0 | 26264 | 0.2561 | {'precision': 0.7177685950413223, 'recall': 0.8379160636758322, 'f1': 0.7732027598486534, 'number': 2073} | {'precision': 0.8424081451969898, 'recall': 0.8726355611601513, 'f1': 0.857255476096627, 'number': 8723} | {'precision': 0.7259802747599661, 'recall': 0.7678226029289859, 'f1': 0.7463154242997349, 'number': 43428} | {'precision': 0.8418079096045198, 'recall': 0.8010752688172043, 'f1': 0.8209366391184573, 'number': 186} | {'precision': 0.8990787269681743, 'recall': 0.9342906875543951, 'f1': 0.9163465642338883, 'number': 2298} | {'precision': 0.9077385662288336, 'recall': 0.940327868852459, 'f1': 0.9237458732587165, 'number': 6100} | {'precision': 0.653671562082777, 'recall': 0.7290053603335319, 'f1': 0.6892862170913698, 'number': 3358} | {'precision': 0.9929292929292929, 'recall': 0.9969574036511156, 'f1': 0.9949392712550607, 'number': 986} | {'precision': 0.6193029490616622, 'recall': 0.6826241134751773, 'f1': 0.649423671633399, 'number': 3384} | {'precision': 0.7925531914893617, 'recall': 0.8622685185185185, 'f1': 0.8259423503325941, 'number': 864} | {'precision': 0.6074950690335306, 'recall': 0.6814159292035398, 'f1': 0.6423357664233577, 'number': 452} | {'precision': 0.6859747275007234, 'recall': 0.7235958485958486, 'f1': 0.7042832384253528, 'number': 19656} | {'precision': 0.5440105890138981, 'recall': 0.5969498910675382, 'f1': 0.569252077562327, 'number': 1377} | 0.7372 | 0.7812 | 0.7586 | 0.9625 |
| 0.0052 | 15.0 | 28140 | 0.2620 | {'precision': 0.7276975361087511, 'recall': 0.8263386396526773, 'f1': 0.7738875084707477, 'number': 2073} | {'precision': 0.8463771352015184, 'recall': 0.869081737934197, 'f1': 0.857579185520362, 'number': 8723} | {'precision': 0.7304345910702879, 'recall': 0.7635857050750667, 'f1': 0.7466423497360037, 'number': 43428} | {'precision': 0.6781115879828327, 'recall': 0.8494623655913979, 'f1': 0.7541766109785203, 'number': 186} | {'precision': 0.8993736951983299, 'recall': 0.9373368146214099, 'f1': 0.9179629235030897, 'number': 2298} | {'precision': 0.9117043121149897, 'recall': 0.9462295081967214, 'f1': 0.9286461266189365, 'number': 6100} | {'precision': 0.6430079155672823, 'recall': 0.7257296009529481, 'f1': 0.6818690542809177, 'number': 3358} | {'precision': 0.9949392712550608, 'recall': 0.9969574036511156, 'f1': 0.9959473150962513, 'number': 986} | {'precision': 0.6221982176613556, 'recall': 0.6808510638297872, 'f1': 0.6502045999717792, 'number': 3384} | {'precision': 0.7815126050420168, 'recall': 0.8611111111111112, 'f1': 0.8193832599118943, 'number': 864} | {'precision': 0.5786713286713286, 'recall': 0.7323008849557522, 'f1': 0.6464843749999999, 'number': 452} | {'precision': 0.7015840321710558, 'recall': 0.7278184778184779, 'f1': 0.7144605089020399, 'number': 19656} | {'precision': 0.5367936925098554, 'recall': 0.5933188090050835, 'f1': 0.5636426353915143, 'number': 1377} | 0.7425 | 0.7801 | 0.7609 | 0.9624 |
| 0.0042 | 16.0 | 30016 | 0.2755 | {'precision': 0.697255223269152, 'recall': 0.8210323203087313, 'f1': 0.7540983606557377, 'number': 2073} | {'precision': 0.8434147959747871, 'recall': 0.8743551530436776, 'f1': 0.858606326691433, 'number': 8723} | {'precision': 0.7236266459774574, 'recall': 0.7731647784839274, 'f1': 0.7475759498602902, 'number': 43428} | {'precision': 0.8277777777777777, 'recall': 0.8010752688172043, 'f1': 0.8142076502732241, 'number': 186} | {'precision': 0.9060402684563759, 'recall': 0.9399477806788512, 'f1': 0.922682614267407, 'number': 2298} | {'precision': 0.9122500793398921, 'recall': 0.9424590163934427, 'f1': 0.9271085308821158, 'number': 6100} | {'precision': 0.6392307692307693, 'recall': 0.7424061941631924, 'f1': 0.6869661063653899, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6294667399670149, 'recall': 0.6767139479905437, 'f1': 0.6522358302477927, 'number': 3384} | {'precision': 0.8457943925233645, 'recall': 0.8379629629629629, 'f1': 0.8418604651162791, 'number': 864} | {'precision': 0.5521885521885522, 'recall': 0.7256637168141593, 'f1': 0.6271510516252391, 'number': 452} | {'precision': 0.6809746954076851, 'recall': 0.7393162393162394, 'f1': 0.7089472143623768, 'number': 19656} | {'precision': 0.5562870309414089, 'recall': 0.6136528685548294, 'f1': 0.5835635359116023, 'number': 1377} | 0.7346 | 0.7876 | 0.7602 | 0.9623 |
| 0.0033 | 17.0 | 31892 | 0.2743 | {'precision': 0.7272325375773652, 'recall': 0.7935359382537386, 'f1': 0.7589388696655133, 'number': 2073} | {'precision': 0.845837501389352, 'recall': 0.8724062822423478, 'f1': 0.8589164785553048, 'number': 8723} | {'precision': 0.7257006300238975, 'recall': 0.7691811734364926, 'f1': 0.7468085582060856, 'number': 43428} | {'precision': 0.8869047619047619, 'recall': 0.8010752688172043, 'f1': 0.8418079096045197, 'number': 186} | {'precision': 0.9024800336275746, 'recall': 0.9342906875543951, 'f1': 0.9181098995082317, 'number': 2298} | {'precision': 0.9123361238350971, 'recall': 0.9468852459016394, 'f1': 0.9292896790282358, 'number': 6100} | {'precision': 0.5567105567105567, 'recall': 0.7176891006551519, 'f1': 0.6270326525302459, 'number': 3358} | {'precision': 0.993933265925177, 'recall': 0.9969574036511156, 'f1': 0.9954430379746836, 'number': 986} | {'precision': 0.6185107498689041, 'recall': 0.6971040189125296, 'f1': 0.6554598499583218, 'number': 3384} | {'precision': 0.8841309823677582, 'recall': 0.8125, 'f1': 0.8468033775633294, 'number': 864} | {'precision': 0.6304347826086957, 'recall': 0.7057522123893806, 'f1': 0.6659707724425887, 'number': 452} | {'precision': 0.7017227075301352, 'recall': 0.7315323565323565, 'f1': 0.7163175330659826, 'number': 19656} | {'precision': 0.5604838709677419, 'recall': 0.6056644880174292, 'f1': 0.5821989528795811, 'number': 1377} | 0.7377 | 0.7829 | 0.7596 | 0.9630 |
| 0.003 | 18.0 | 33768 | 0.2938 | {'precision': 0.7085594989561587, 'recall': 0.818620356970574, 'f1': 0.7596239928379588, 'number': 2073} | {'precision': 0.8580645161290322, 'recall': 0.869081737934197, 'f1': 0.8635379883813646, 'number': 8723} | {'precision': 0.7304742970746947, 'recall': 0.7699180252371741, 'f1': 0.7496776941962533, 'number': 43428} | {'precision': 0.6926406926406926, 'recall': 0.8602150537634409, 'f1': 0.7673860911270983, 'number': 186} | {'precision': 0.9013848090642048, 'recall': 0.9347258485639687, 'f1': 0.9177526169621877, 'number': 2298} | {'precision': 0.9117088607594936, 'recall': 0.9445901639344262, 'f1': 0.9278582930756843, 'number': 6100} | {'precision': 0.6144427786106946, 'recall': 0.7322811197141156, 'f1': 0.6682065217391304, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6367369285518751, 'recall': 0.6873522458628841, 'f1': 0.6610771635640187, 'number': 3384} | {'precision': 0.8362168396770473, 'recall': 0.8391203703703703, 'f1': 0.8376660889659157, 'number': 864} | {'precision': 0.6334661354581673, 'recall': 0.7035398230088495, 'f1': 0.6666666666666667, 'number': 452} | {'precision': 0.6995040357872216, 'recall': 0.7318884818884819, 'f1': 0.7153299189498284, 'number': 19656} | {'precision': 0.5398574206092028, 'recall': 0.6049382716049383, 'f1': 0.5705479452054795, 'number': 1377} | 0.7426 | 0.7839 | 0.7627 | 0.9631 |
| 0.0025 | 19.0 | 35644 | 0.2990 | {'precision': 0.707874337005304, 'recall': 0.8369512783405693, 'f1': 0.7670203359858533, 'number': 2073} | {'precision': 0.8577489950870925, 'recall': 0.8806603232832741, 'f1': 0.8690536795067595, 'number': 8723} | {'precision': 0.7345506842151137, 'recall': 0.7762273187805103, 'f1': 0.7548141513658755, 'number': 43428} | {'precision': 0.8105263157894737, 'recall': 0.8279569892473119, 'f1': 0.8191489361702128, 'number': 186} | {'precision': 0.9, 'recall': 0.9399477806788512, 'f1': 0.9195402298850573, 'number': 2298} | {'precision': 0.908573236317621, 'recall': 0.9416393442622951, 'f1': 0.9248108195137659, 'number': 6100} | {'precision': 0.61839821472849, 'recall': 0.7427039904705182, 'f1': 0.6748748477878501, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6411716842961758, 'recall': 0.6985815602836879, 'f1': 0.6686465846414934, 'number': 3384} | {'precision': 0.8677184466019418, 'recall': 0.8275462962962963, 'f1': 0.8471563981042655, 'number': 864} | {'precision': 0.6414342629482072, 'recall': 0.7123893805309734, 'f1': 0.6750524109014675, 'number': 452} | {'precision': 0.6951624548736463, 'recall': 0.7347374847374848, 'f1': 0.7144023150552794, 'number': 19656} | {'precision': 0.5524115755627009, 'recall': 0.6238198983297023, 'f1': 0.5859481582537517, 'number': 1377} | 0.7443 | 0.7898 | 0.7664 | 0.9635 |
| 0.0021 | 20.0 | 37520 | 0.2981 | {'precision': 0.7228813559322034, 'recall': 0.8229618909792571, 'f1': 0.7696819309722536, 'number': 2073} | {'precision': 0.8535364768683275, 'recall': 0.8798578470709618, 'f1': 0.8664973186565058, 'number': 8723} | {'precision': 0.7315439151833142, 'recall': 0.7769411439624205, 'f1': 0.7535594242387018, 'number': 43428} | {'precision': 0.8031088082901554, 'recall': 0.8333333333333334, 'f1': 0.8179419525065963, 'number': 186} | {'precision': 0.9137055837563451, 'recall': 0.9399477806788512, 'f1': 0.9266409266409267, 'number': 2298} | {'precision': 0.9108754155453538, 'recall': 0.9432786885245902, 'f1': 0.9267939115728436, 'number': 6100} | {'precision': 0.5945041816009558, 'recall': 0.7409172126265634, 'f1': 0.6596844756728092, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6354533152909337, 'recall': 0.693853427895981, 'f1': 0.6633705325610961, 'number': 3384} | {'precision': 0.8534278959810875, 'recall': 0.8356481481481481, 'f1': 0.8444444444444444, 'number': 864} | {'precision': 0.6076190476190476, 'recall': 0.7057522123893806, 'f1': 0.6530194472876152, 'number': 452} | {'precision': 0.6943667406192727, 'recall': 0.7324481074481074, 'f1': 0.7128992324832879, 'number': 19656} | {'precision': 0.5667556742323098, 'recall': 0.616557734204793, 'f1': 0.5906086956521739, 'number': 1377} | 0.7417 | 0.7891 | 0.7647 | 0.9639 |
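The per-entity dictionaries in the table follow the output format of the `seqeval` metric (precision/recall/F1 plus support under `number`). A hedged sketch of producing that structure, assuming the `evaluate` wrapper (the card does not state the metric implementation):

```python
import evaluate

# Assumed workflow: the dict shape above matches evaluate's "seqeval" metric.
seqeval = evaluate.load("seqeval")

predictions = [["B-Title", "I-Title", "O", "B-Author"]]
references = [["B-Title", "I-Title", "O", "B-Author"]]

results = seqeval.compute(predictions=predictions, references=references)
# Per-entity: results["Title"] -> {'precision': ..., 'recall': ..., 'f1': ..., 'number': ...}
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```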
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.15.1
|
std10012/uuu_fine_tune_taipower
|
std10012
| 2025-06-25T03:02:49Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:25:10Z |
---
license: apache-2.0
---
|
daixuancheng/sac_static0.4_constrainbyAdv_step160
|
daixuancheng
| 2025-06-25T03:02:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:37:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
johnnyyang0518/uuu_fine_tune_taipower
|
johnnyyang0518
| 2025-06-25T03:02:00Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T01:19:05Z |
---
license: apache-2.0
---
|
tracylu00200/uuu_fine_tune_taipower
|
tracylu00200
| 2025-06-25T03:01:41Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:31:47Z |
---
license: apache-2.0
---
|
Cameron914/uuu_fine_tune_taipower
|
Cameron914
| 2025-06-25T03:00:26Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T01:34:11Z |
---
license: apache-2.0
---
|
JS1016/uuu_fine_tune_taipower
|
JS1016
| 2025-06-25T02:59:23Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:25:52Z |
---
license: apache-2.0
---
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1092
|
luckeciano
| 2025-06-25T02:58:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T23:29:49Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1092
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1092
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1092", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/h13ebtuy)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
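As a minimal sketch of what a TRL GRPO run looks like (the actual reward function and `GRPOConfig` used for this model are not stated in the card; the reward below is a toy placeholder):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy placeholder reward -- NOT the reward used to train this model.
def reward_brevity(completions, **kwargs):
    return [-float(len(c)) for c in completions]

# Renaming "problem" -> "prompt" is an assumption about the dataset schema.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.rename_column("problem", "prompt")

training_args = GRPOConfig(output_dir="Qwen2.5-Math-7B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_brevity,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```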
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
donoway/0634b9sk_20250624_005109
|
donoway
| 2025-06-25T02:58:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-06-25T02:58:39Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 0634b9sk_20250624_005109
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0634b9sk_20250624_005109
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5751
- Model Preparation Time: 0.0086
- Move Accuracy: 0.3761
- Token Accuracy: 0.7780
- Accuracy: 0.3761
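Both a per-token accuracy and a whole-sequence "move" accuracy are reported (note that Move Accuracy and Accuracy coincide above). A hedged sketch of the distinction, assuming standard `-100` label masking:

```python
import numpy as np

# Sketch under assumptions: labels use -100 for ignored (prompt/padding) positions,
# and a "move" counts as correct only if every unmasked token in the row matches.
def token_and_move_accuracy(pred_ids, label_ids, ignore_index=-100):
    pred_ids = np.asarray(pred_ids)
    label_ids = np.asarray(label_ids)
    mask = label_ids != ignore_index
    token_acc = float((pred_ids[mask] == label_ids[mask]).mean())
    move_hits = [(p[m] == l[m]).all() for p, l, m in zip(pred_ids, label_ids, mask)]
    move_acc = float(np.mean(move_hits))
    return token_acc, move_acc
```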
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
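Since this is a PEFT adapter on Llama-3.2-1B, a minimal sketch of the corresponding setup (the LoRA rank, alpha, and target modules are assumptions; only the `TrainingArguments` values come from the list above):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values
model = get_peft_model(base, peft_config)

training_args = TrainingArguments(
    output_dir="0634b9sk_20250624_005109",
    learning_rate=1e-3,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=256,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.01,
    num_train_epochs=10,
)
```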
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Move Accuracy | Token Accuracy | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:----------------------:|:-------------:|:--------------:|:--------:|
| No log | 0 | 0 | 6.4123 | 0.0086 | 0.0 | 0.1049 | 0.0 |
| 1.8037 | 0.0098 | 100 | 1.8310 | 0.0086 | 0.0023 | 0.2664 | 0.0023 |
| 1.7656 | 0.0196 | 200 | 1.7195 | 0.0086 | 0.0064 | 0.3148 | 0.0064 |
| 1.6675 | 0.0295 | 300 | 1.6926 | 0.0086 | 0.0085 | 0.3345 | 0.0085 |
| 1.6154 | 0.0393 | 400 | 1.6505 | 0.0086 | 0.0159 | 0.3571 | 0.0159 |
| 1.6371 | 0.0491 | 500 | 1.6237 | 0.0086 | 0.0162 | 0.3687 | 0.0162 |
| 1.5638 | 0.0589 | 600 | 1.5819 | 0.0086 | 0.0209 | 0.3853 | 0.0209 |
| 1.5692 | 0.0687 | 700 | 1.5489 | 0.0086 | 0.0269 | 0.3973 | 0.0269 |
| 1.5507 | 0.0785 | 800 | 1.5243 | 0.0086 | 0.0334 | 0.4054 | 0.0334 |
| 1.5213 | 0.0884 | 900 | 1.5079 | 0.0086 | 0.0375 | 0.4155 | 0.0375 |
| 1.5039 | 0.0982 | 1000 | 1.4827 | 0.0086 | 0.0382 | 0.4231 | 0.0382 |
| 1.4197 | 0.1080 | 1100 | 1.4383 | 0.0086 | 0.0473 | 0.4383 | 0.0473 |
| 1.296 | 0.1178 | 1200 | 1.3687 | 0.0086 | 0.0567 | 0.4690 | 0.0567 |
| 1.3415 | 0.1276 | 1300 | 1.3338 | 0.0086 | 0.0623 | 0.4862 | 0.0623 |
| 1.2246 | 0.1374 | 1400 | 1.2532 | 0.0086 | 0.0721 | 0.5197 | 0.0721 |
| 1.177 | 0.1473 | 1500 | 1.2068 | 0.0086 | 0.0863 | 0.5398 | 0.0863 |
| 1.1295 | 0.1571 | 1600 | 1.1276 | 0.0086 | 0.0992 | 0.5699 | 0.0992 |
| 1.0918 | 0.1669 | 1700 | 1.1150 | 0.0086 | 0.1059 | 0.5745 | 0.1059 |
| 1.0785 | 0.1767 | 1800 | 1.0519 | 0.0086 | 0.1257 | 0.5980 | 0.1257 |
| 0.968 | 0.1865 | 1900 | 1.0250 | 0.0086 | 0.1293 | 0.6063 | 0.1293 |
| 0.9705 | 0.1963 | 2000 | 0.9932 | 0.0086 | 0.1374 | 0.6167 | 0.1374 |
| 0.9839 | 0.2062 | 2100 | 0.9692 | 0.0086 | 0.1335 | 0.6206 | 0.1335 |
| 1.024 | 0.2160 | 2200 | 0.9491 | 0.0086 | 0.1533 | 0.6323 | 0.1533 |
| 1.0411 | 0.2258 | 2300 | 0.9453 | 0.0086 | 0.1455 | 0.6293 | 0.1455 |
| 0.8448 | 0.2356 | 2400 | 0.9300 | 0.0086 | 0.1564 | 0.6409 | 0.1564 |
| 0.8783 | 0.2454 | 2500 | 0.9057 | 0.0086 | 0.1543 | 0.6470 | 0.1543 |
| 0.912 | 0.2553 | 2600 | 0.9013 | 0.0086 | 0.1570 | 0.6472 | 0.1570 |
| 0.9678 | 0.2651 | 2700 | 0.8889 | 0.0086 | 0.1722 | 0.6538 | 0.1722 |
| 0.8489 | 0.2749 | 2800 | 0.8712 | 0.0086 | 0.1740 | 0.6597 | 0.1740 |
| 0.8058 | 0.2847 | 2900 | 0.8650 | 0.0086 | 0.1759 | 0.6586 | 0.1759 |
| 0.836 | 0.2945 | 3000 | 0.8692 | 0.0086 | 0.1772 | 0.6602 | 0.1772 |
| 0.8624 | 0.3043 | 3100 | 0.8441 | 0.0086 | 0.1829 | 0.6699 | 0.1829 |
| 0.8044 | 0.3142 | 3200 | 0.8342 | 0.0086 | 0.1927 | 0.6742 | 0.1927 |
| 0.8515 | 0.3240 | 3300 | 0.8218 | 0.0086 | 0.2025 | 0.6790 | 0.2025 |
| 0.785 | 0.3338 | 3400 | 0.8334 | 0.0086 | 0.1852 | 0.6718 | 0.1852 |
| 0.7539 | 0.3436 | 3500 | 0.8343 | 0.0086 | 0.1853 | 0.6710 | 0.1853 |
| 0.8563 | 0.3534 | 3600 | 0.8293 | 0.0086 | 0.1958 | 0.6778 | 0.1958 |
| 0.7276 | 0.3632 | 3700 | 0.8242 | 0.0086 | 0.1878 | 0.6749 | 0.1878 |
| 0.8719 | 0.3731 | 3800 | 0.8272 | 0.0086 | 0.1907 | 0.6747 | 0.1907 |
| 0.7652 | 0.3829 | 3900 | 0.8125 | 0.0086 | 0.1979 | 0.6780 | 0.1979 |
| 0.8551 | 0.3927 | 4000 | 0.8166 | 0.0086 | 0.1974 | 0.6793 | 0.1974 |
| 0.7497 | 0.4025 | 4100 | 0.7973 | 0.0086 | 0.2022 | 0.6851 | 0.2022 |
| 0.7228 | 0.4123 | 4200 | 0.8029 | 0.0086 | 0.1916 | 0.6789 | 0.1916 |
| 0.7847 | 0.4221 | 4300 | 0.7937 | 0.0086 | 0.2073 | 0.6881 | 0.2073 |
| 0.8106 | 0.4320 | 4400 | 0.8028 | 0.0086 | 0.2006 | 0.6820 | 0.2006 |
| 0.7863 | 0.4418 | 4500 | 0.7828 | 0.0086 | 0.2115 | 0.6905 | 0.2115 |
| 0.7327 | 0.4516 | 4600 | 0.7859 | 0.0086 | 0.2093 | 0.6890 | 0.2093 |
| 0.7728 | 0.4614 | 4700 | 0.7834 | 0.0086 | 0.2147 | 0.6906 | 0.2147 |
| 0.7996 | 0.4712 | 4800 | 0.7797 | 0.0086 | 0.2061 | 0.6922 | 0.2061 |
| 0.8005 | 0.4811 | 4900 | 0.7828 | 0.0086 | 0.2104 | 0.6895 | 0.2104 |
| 0.7035 | 0.4909 | 5000 | 0.7825 | 0.0086 | 0.2228 | 0.6935 | 0.2228 |
| 0.7859 | 0.5007 | 5100 | 0.7652 | 0.0086 | 0.2163 | 0.6973 | 0.2163 |
| 0.7345 | 0.5105 | 5200 | 0.7848 | 0.0086 | 0.2120 | 0.6895 | 0.2120 |
| 0.7561 | 0.5203 | 5300 | 0.7733 | 0.0086 | 0.2253 | 0.6948 | 0.2253 |
| 0.7839 | 0.5301 | 5400 | 0.7801 | 0.0086 | 0.2188 | 0.6930 | 0.2188 |
| 0.807 | 0.5400 | 5500 | 0.7754 | 0.0086 | 0.2241 | 0.6960 | 0.2241 |
| 0.7894 | 0.5498 | 5600 | 0.7638 | 0.0086 | 0.2271 | 0.6961 | 0.2271 |
| 0.7104 | 0.5596 | 5700 | 0.7821 | 0.0086 | 0.2165 | 0.6904 | 0.2165 |
| 0.7839 | 0.5694 | 5800 | 0.7691 | 0.0086 | 0.2203 | 0.6920 | 0.2203 |
| 0.8191 | 0.5792 | 5900 | 0.7924 | 0.0086 | 0.2134 | 0.6868 | 0.2134 |
| 0.7289 | 0.5890 | 6000 | 0.7563 | 0.0086 | 0.2373 | 0.7017 | 0.2373 |
| 0.7667 | 0.5989 | 6100 | 0.7570 | 0.0086 | 0.2311 | 0.7024 | 0.2311 |
| 0.7627 | 0.6087 | 6200 | 0.7529 | 0.0086 | 0.2289 | 0.7007 | 0.2289 |
| 0.7505 | 0.6185 | 6300 | 0.7473 | 0.0086 | 0.2362 | 0.7042 | 0.2362 |
| 0.6756 | 0.6283 | 6400 | 0.7554 | 0.0086 | 0.2291 | 0.7004 | 0.2291 |
| 0.7875 | 0.6381 | 6500 | 0.7550 | 0.0086 | 0.2375 | 0.7037 | 0.2375 |
| 0.8439 | 0.6479 | 6600 | 0.7563 | 0.0086 | 0.2221 | 0.6985 | 0.2221 |
| 0.7298 | 0.6578 | 6700 | 0.7474 | 0.0086 | 0.2350 | 0.7044 | 0.2350 |
| 0.7953 | 0.6676 | 6800 | 0.7520 | 0.0086 | 0.2290 | 0.7025 | 0.2290 |
| 0.6877 | 0.6774 | 6900 | 0.7492 | 0.0086 | 0.2304 | 0.7040 | 0.2304 |
| 0.7067 | 0.6872 | 7000 | 0.7363 | 0.0086 | 0.2388 | 0.7082 | 0.2388 |
| 0.7256 | 0.6970 | 7100 | 0.7433 | 0.0086 | 0.2421 | 0.7093 | 0.2421 |
| 0.6785 | 0.7069 | 7200 | 0.7389 | 0.0086 | 0.2449 | 0.7100 | 0.2449 |
| 0.7192 | 0.7167 | 7300 | 0.7431 | 0.0086 | 0.2426 | 0.7068 | 0.2426 |
| 0.7111 | 0.7265 | 7400 | 0.7374 | 0.0086 | 0.2438 | 0.7103 | 0.2438 |
| 0.6601 | 0.7363 | 7500 | 0.7386 | 0.0086 | 0.2464 | 0.7105 | 0.2464 |
| 0.8153 | 0.7461 | 7600 | 0.7251 | 0.0086 | 0.2505 | 0.7148 | 0.2505 |
| 0.7885 | 0.7559 | 7700 | 0.7344 | 0.0086 | 0.2528 | 0.7108 | 0.2528 |
| 0.7111 | 0.7658 | 7800 | 0.7409 | 0.0086 | 0.2422 | 0.7085 | 0.2422 |
| 0.6856 | 0.7756 | 7900 | 0.7448 | 0.0086 | 0.2442 | 0.7055 | 0.2442 |
| 0.7019 | 0.7854 | 8000 | 0.7214 | 0.0086 | 0.2508 | 0.7148 | 0.2508 |
| 0.6213 | 0.7952 | 8100 | 0.7206 | 0.0086 | 0.2547 | 0.7159 | 0.2547 |
| 0.7054 | 0.8050 | 8200 | 0.7320 | 0.0086 | 0.2468 | 0.7114 | 0.2468 |
| 0.6639 | 0.8148 | 8300 | 0.7443 | 0.0086 | 0.2485 | 0.7090 | 0.2485 |
| 0.788 | 0.8247 | 8400 | 0.7274 | 0.0086 | 0.2461 | 0.7118 | 0.2461 |
| 0.6754 | 0.8345 | 8500 | 0.7288 | 0.0086 | 0.2407 | 0.7091 | 0.2407 |
| 0.7268 | 0.8443 | 8600 | 0.7205 | 0.0086 | 0.2525 | 0.7140 | 0.2525 |
| 0.7173 | 0.8541 | 8700 | 0.7243 | 0.0086 | 0.2479 | 0.7138 | 0.2479 |
| 0.7146 | 0.8639 | 8800 | 0.7146 | 0.0086 | 0.2603 | 0.7159 | 0.2603 |
| 0.7047 | 0.8737 | 8900 | 0.7206 | 0.0086 | 0.2522 | 0.7147 | 0.2522 |
| 0.7468 | 0.8836 | 9000 | 0.7203 | 0.0086 | 0.2530 | 0.7169 | 0.2530 |
| 0.6902 | 0.8934 | 9100 | 0.7164 | 0.0086 | 0.2564 | 0.7170 | 0.2564 |
| 0.6852 | 0.9032 | 9200 | 0.7092 | 0.0086 | 0.2539 | 0.7176 | 0.2539 |
| 0.7086 | 0.9130 | 9300 | 0.7063 | 0.0086 | 0.2593 | 0.7186 | 0.2593 |
| 0.6501 | 0.9228 | 9400 | 0.7086 | 0.0086 | 0.2589 | 0.7193 | 0.2589 |
| 0.7028 | 0.9327 | 9500 | 0.7150 | 0.0086 | 0.2603 | 0.7183 | 0.2603 |
| 0.7217 | 0.9425 | 9600 | 0.7071 | 0.0086 | 0.2623 | 0.7212 | 0.2623 |
| 0.714 | 0.9523 | 9700 | 0.6963 | 0.0086 | 0.2723 | 0.7248 | 0.2723 |
| 0.682 | 0.9621 | 9800 | 0.7147 | 0.0086 | 0.2606 | 0.7180 | 0.2606 |
| 0.6879 | 0.9719 | 9900 | 0.7037 | 0.0086 | 0.2705 | 0.7240 | 0.2705 |
| 0.7236 | 0.9817 | 10000 | 0.7231 | 0.0086 | 0.2545 | 0.7127 | 0.2545 |
| 0.7024 | 0.9916 | 10100 | 0.7047 | 0.0086 | 0.2580 | 0.7200 | 0.2580 |
| 0.6224 | 1.0014 | 10200 | 0.7027 | 0.0086 | 0.2725 | 0.7236 | 0.2725 |
| 0.7081 | 1.0112 | 10300 | 0.7151 | 0.0086 | 0.2565 | 0.7171 | 0.2565 |
| 0.7366 | 1.0210 | 10400 | 0.6958 | 0.0086 | 0.2615 | 0.7232 | 0.2615 |
| 0.6681 | 1.0308 | 10500 | 0.7096 | 0.0086 | 0.2728 | 0.7215 | 0.2728 |
| 0.6881 | 1.0406 | 10600 | 0.7042 | 0.0086 | 0.2632 | 0.7232 | 0.2632 |
| 0.7179 | 1.0505 | 10700 | 0.6982 | 0.0086 | 0.2674 | 0.7230 | 0.2674 |
| 0.6991 | 1.0603 | 10800 | 0.7068 | 0.0086 | 0.2620 | 0.7192 | 0.2620 |
| 0.6631 | 1.0701 | 10900 | 0.7108 | 0.0086 | 0.2660 | 0.7206 | 0.2660 |
| 0.7591 | 1.0799 | 11000 | 0.7046 | 0.0086 | 0.2667 | 0.7241 | 0.2667 |
| 0.7069 | 1.0897 | 11100 | 0.7194 | 0.0086 | 0.2705 | 0.7184 | 0.2705 |
| 0.7639 | 1.0995 | 11200 | 0.7081 | 0.0086 | 0.2650 | 0.7218 | 0.2650 |
| 0.702 | 1.1094 | 11300 | 0.7015 | 0.0086 | 0.2652 | 0.7237 | 0.2652 |
| 0.7034 | 1.1192 | 11400 | 0.6927 | 0.0086 | 0.2764 | 0.7257 | 0.2764 |
| 0.6367 | 1.1290 | 11500 | 0.6942 | 0.0086 | 0.2770 | 0.7255 | 0.2770 |
| 0.6996 | 1.1388 | 11600 | 0.6947 | 0.0086 | 0.2750 | 0.7254 | 0.2750 |
| 0.7785 | 1.1486 | 11700 | 0.7048 | 0.0086 | 0.2701 | 0.7204 | 0.2701 |
| 0.69 | 1.1585 | 11800 | 0.7067 | 0.0086 | 0.2686 | 0.7200 | 0.2686 |
| 0.6748 | 1.1683 | 11900 | 0.6922 | 0.0086 | 0.2784 | 0.7278 | 0.2784 |
| 0.6499 | 1.1781 | 12000 | 0.7015 | 0.0086 | 0.2764 | 0.7260 | 0.2764 |
| 0.6821 | 1.1879 | 12100 | 0.6967 | 0.0086 | 0.2645 | 0.7224 | 0.2645 |
| 0.6897 | 1.1977 | 12200 | 0.6892 | 0.0086 | 0.2811 | 0.7295 | 0.2811 |
| 0.6871 | 1.2075 | 12300 | 0.6922 | 0.0086 | 0.2785 | 0.7282 | 0.2785 |
| 0.67 | 1.2174 | 12400 | 0.6886 | 0.0086 | 0.2774 | 0.7284 | 0.2774 |
| 0.7051 | 1.2272 | 12500 | 0.6811 | 0.0086 | 0.2836 | 0.7325 | 0.2836 |
| 0.6538 | 1.2370 | 12600 | 0.6935 | 0.0086 | 0.2810 | 0.7288 | 0.2810 |
| 0.6638 | 1.2468 | 12700 | 0.6872 | 0.0086 | 0.2730 | 0.7268 | 0.2730 |
| 0.7019 | 1.2566 | 12800 | 0.6861 | 0.0086 | 0.2779 | 0.7290 | 0.2779 |
| 0.6739 | 1.2664 | 12900 | 0.6917 | 0.0086 | 0.2747 | 0.7266 | 0.2747 |
| 0.6654 | 1.2763 | 13000 | 0.6806 | 0.0086 | 0.2834 | 0.7300 | 0.2834 |
| 0.7074 | 1.2861 | 13100 | 0.6819 | 0.0086 | 0.2810 | 0.7327 | 0.2810 |
| 0.7077 | 1.2959 | 13200 | 0.6929 | 0.0086 | 0.2728 | 0.7245 | 0.2728 |
| 0.6494 | 1.3057 | 13300 | 0.6893 | 0.0086 | 0.2790 | 0.7292 | 0.2790 |
| 0.6862 | 1.3155 | 13400 | 0.6846 | 0.0086 | 0.2783 | 0.7307 | 0.2783 |
| 0.6761 | 1.3253 | 13500 | 0.6890 | 0.0086 | 0.2750 | 0.7277 | 0.2750 |
| 0.6871 | 1.3352 | 13600 | 0.6831 | 0.0086 | 0.2767 | 0.7292 | 0.2767 |
| 0.6717 | 1.3450 | 13700 | 0.6843 | 0.0086 | 0.2727 | 0.7263 | 0.2727 |
| 0.7139 | 1.3548 | 13800 | 0.6769 | 0.0086 | 0.2830 | 0.7317 | 0.2830 |
| 0.6296 | 1.3646 | 13900 | 0.6863 | 0.0086 | 0.2850 | 0.7312 | 0.2850 |
| 0.6813 | 1.3744 | 14000 | 0.6898 | 0.0086 | 0.2781 | 0.7280 | 0.2781 |
| 0.6626 | 1.3843 | 14100 | 0.6847 | 0.0086 | 0.2832 | 0.7319 | 0.2832 |
| 0.6717 | 1.3941 | 14200 | 0.6848 | 0.0086 | 0.2853 | 0.7311 | 0.2853 |
| 0.6675 | 1.4039 | 14300 | 0.6751 | 0.0086 | 0.2920 | 0.7339 | 0.2920 |
| 0.6248 | 1.4137 | 14400 | 0.6733 | 0.0086 | 0.2893 | 0.7366 | 0.2893 |
| 0.7265 | 1.4235 | 14500 | 0.6808 | 0.0086 | 0.2868 | 0.7323 | 0.2868 |
| 0.7149 | 1.4333 | 14600 | 0.6759 | 0.0086 | 0.2891 | 0.7332 | 0.2891 |
| 0.6071 | 1.4432 | 14700 | 0.6949 | 0.0086 | 0.2833 | 0.7274 | 0.2833 |
| 0.6737 | 1.4530 | 14800 | 0.6725 | 0.0086 | 0.2936 | 0.7367 | 0.2936 |
| 0.7388 | 1.4628 | 14900 | 0.6699 | 0.0086 | 0.2906 | 0.7366 | 0.2906 |
| 0.6418 | 1.4726 | 15000 | 0.6783 | 0.0086 | 0.2850 | 0.7329 | 0.2850 |
| 0.7086 | 1.4824 | 15100 | 0.6794 | 0.0086 | 0.2826 | 0.7306 | 0.2826 |
| 0.646 | 1.4922 | 15200 | 0.6731 | 0.0086 | 0.2814 | 0.7341 | 0.2814 |
| 0.6442 | 1.5021 | 15300 | 0.6708 | 0.0086 | 0.2952 | 0.7371 | 0.2952 |
| 0.6451 | 1.5119 | 15400 | 0.6723 | 0.0086 | 0.2868 | 0.7338 | 0.2868 |
| 0.7044 | 1.5217 | 15500 | 0.6749 | 0.0086 | 0.2902 | 0.7332 | 0.2902 |
| 0.6012 | 1.5315 | 15600 | 0.6633 | 0.0086 | 0.3042 | 0.7398 | 0.3042 |
| 0.6967 | 1.5413 | 15700 | 0.6782 | 0.0086 | 0.2883 | 0.7343 | 0.2883 |
| 0.6426 | 1.5511 | 15800 | 0.6714 | 0.0086 | 0.2904 | 0.7356 | 0.2904 |
| 0.5905 | 1.5610 | 15900 | 0.6691 | 0.0086 | 0.2922 | 0.7365 | 0.2922 |
| 0.6741 | 1.5708 | 16000 | 0.6652 | 0.0086 | 0.2965 | 0.7375 | 0.2965 |
| 0.6847 | 1.5806 | 16100 | 0.6817 | 0.0086 | 0.2906 | 0.7337 | 0.2906 |
| 0.714 | 1.5904 | 16200 | 0.6625 | 0.0086 | 0.2953 | 0.7376 | 0.2953 |
| 0.6933 | 1.6002 | 16300 | 0.6659 | 0.0086 | 0.2957 | 0.7389 | 0.2957 |
| 0.6825 | 1.6101 | 16400 | 0.6700 | 0.0086 | 0.2936 | 0.7362 | 0.2936 |
| 0.6597 | 1.6199 | 16500 | 0.6695 | 0.0086 | 0.2926 | 0.7364 | 0.2926 |
| 0.6371 | 1.6297 | 16600 | 0.6673 | 0.0086 | 0.2921 | 0.7358 | 0.2921 |
| 0.6487 | 1.6395 | 16700 | 0.6683 | 0.0086 | 0.2905 | 0.7363 | 0.2905 |
| 0.6394 | 1.6493 | 16800 | 0.6698 | 0.0086 | 0.2997 | 0.7384 | 0.2997 |
| 0.6087 | 1.6591 | 16900 | 0.6653 | 0.0086 | 0.2971 | 0.7395 | 0.2971 |
| 0.6377 | 1.6690 | 17000 | 0.6645 | 0.0086 | 0.2953 | 0.7383 | 0.2953 |
| 0.6502 | 1.6788 | 17100 | 0.6598 | 0.0086 | 0.3020 | 0.7404 | 0.3020 |
| 0.6378 | 1.6886 | 17200 | 0.6758 | 0.0086 | 0.2955 | 0.7335 | 0.2955 |
| 0.6367 | 1.6984 | 17300 | 0.6650 | 0.0086 | 0.3042 | 0.7392 | 0.3042 |
| 0.6279 | 1.7082 | 17400 | 0.6673 | 0.0086 | 0.2937 | 0.7353 | 0.2937 |
| 0.6792 | 1.7180 | 17500 | 0.6627 | 0.0086 | 0.2971 | 0.7393 | 0.2971 |
| 0.6164 | 1.7279 | 17600 | 0.6641 | 0.0086 | 0.3006 | 0.7398 | 0.3006 |
| 0.7035 | 1.7377 | 17700 | 0.6619 | 0.0086 | 0.3043 | 0.7413 | 0.3043 |
| 0.6833 | 1.7475 | 17800 | 0.6678 | 0.0086 | 0.2979 | 0.7380 | 0.2979 |
| 0.6802 | 1.7573 | 17900 | 0.6650 | 0.0086 | 0.3007 | 0.7392 | 0.3007 |
| 0.6434 | 1.7671 | 18000 | 0.6658 | 0.0086 | 0.3017 | 0.7399 | 0.3017 |
| 0.6481 | 1.7769 | 18100 | 0.6555 | 0.0086 | 0.3074 | 0.7440 | 0.3074 |
| 0.6753 | 1.7868 | 18200 | 0.6710 | 0.0086 | 0.2969 | 0.7371 | 0.2969 |
| 0.7124 | 1.7966 | 18300 | 0.6606 | 0.0086 | 0.3011 | 0.7408 | 0.3011 |
| 0.6148 | 1.8064 | 18400 | 0.6656 | 0.0086 | 0.2975 | 0.7395 | 0.2975 |
| 0.656 | 1.8162 | 18500 | 0.6677 | 0.0086 | 0.2930 | 0.7371 | 0.2930 |
| 0.6465 | 1.8260 | 18600 | 0.6570 | 0.0086 | 0.3054 | 0.7421 | 0.3054 |
| 0.7047 | 1.8359 | 18700 | 0.6605 | 0.0086 | 0.2995 | 0.7393 | 0.2995 |
| 0.581 | 1.8457 | 18800 | 0.6618 | 0.0086 | 0.2980 | 0.7410 | 0.2980 |
| 0.5702 | 1.8555 | 18900 | 0.6465 | 0.0086 | 0.3109 | 0.7448 | 0.3109 |
| 0.6844 | 1.8653 | 19000 | 0.6571 | 0.0086 | 0.3028 | 0.7405 | 0.3028 |
| 0.6136 | 1.8751 | 19100 | 0.6460 | 0.0086 | 0.3080 | 0.7437 | 0.3080 |
| 0.6142 | 1.8849 | 19200 | 0.6570 | 0.0086 | 0.2999 | 0.7414 | 0.2999 |
| 0.739 | 1.8948 | 19300 | 0.6567 | 0.0086 | 0.3018 | 0.7420 | 0.3018 |
| 0.6359 | 1.9046 | 19400 | 0.6588 | 0.0086 | 0.3021 | 0.7404 | 0.3021 |
| 0.6352 | 1.9144 | 19500 | 0.6617 | 0.0086 | 0.2946 | 0.7389 | 0.2946 |
| 0.6775 | 1.9242 | 19600 | 0.6547 | 0.0086 | 0.3048 | 0.7411 | 0.3048 |
| 0.6773 | 1.9340 | 19700 | 0.6570 | 0.0086 | 0.3075 | 0.7405 | 0.3075 |
| 0.6461 | 1.9438 | 19800 | 0.6610 | 0.0086 | 0.3027 | 0.7400 | 0.3027 |
| 0.609 | 1.9537 | 19900 | 0.6527 | 0.0086 | 0.3095 | 0.7429 | 0.3095 |
| 0.617 | 1.9635 | 20000 | 0.6515 | 0.0086 | 0.3062 | 0.7440 | 0.3062 |
| 0.6755 | 1.9733 | 20100 | 0.6508 | 0.0086 | 0.3100 | 0.7429 | 0.3100 |
| 0.6929 | 1.9831 | 20200 | 0.6550 | 0.0086 | 0.3054 | 0.7438 | 0.3054 |
| 0.5971 | 1.9929 | 20300 | 0.6548 | 0.0086 | 0.2997 | 0.7426 | 0.2997 |
| 0.6625 | 2.0027 | 20400 | 0.6433 | 0.0086 | 0.3086 | 0.7456 | 0.3086 |
| 0.5759 | 2.0126 | 20500 | 0.6572 | 0.0086 | 0.3004 | 0.7392 | 0.3004 |
| 0.6804 | 2.0224 | 20600 | 0.6482 | 0.0086 | 0.3140 | 0.7455 | 0.3140 |
| 0.5674 | 2.0322 | 20700 | 0.6473 | 0.0086 | 0.3057 | 0.7452 | 0.3057 |
| 0.6234 | 2.0420 | 20800 | 0.6484 | 0.0086 | 0.3046 | 0.7435 | 0.3046 |
| 0.6884 | 2.0518 | 20900 | 0.6465 | 0.0086 | 0.3064 | 0.7450 | 0.3064 |
| 0.5904 | 2.0617 | 21000 | 0.6528 | 0.0086 | 0.3045 | 0.7433 | 0.3045 |
| 0.7058 | 2.0715 | 21100 | 0.6542 | 0.0086 | 0.3071 | 0.7438 | 0.3071 |
| 0.7093 | 2.0813 | 21200 | 0.6704 | 0.0086 | 0.2872 | 0.7355 | 0.2872 |
| 0.6866 | 2.0911 | 21300 | 0.6541 | 0.0086 | 0.3099 | 0.7430 | 0.3099 |
| 0.6481 | 2.1009 | 21400 | 0.6522 | 0.0086 | 0.3113 | 0.7444 | 0.3113 |
| 0.6671 | 2.1107 | 21500 | 0.6533 | 0.0086 | 0.3144 | 0.7442 | 0.3144 |
| 0.6214 | 2.1206 | 21600 | 0.6448 | 0.0086 | 0.3167 | 0.7462 | 0.3167 |
| 0.6669 | 2.1304 | 21700 | 0.6567 | 0.0086 | 0.3128 | 0.7444 | 0.3128 |
| 0.6161 | 2.1402 | 21800 | 0.6628 | 0.0086 | 0.2983 | 0.7381 | 0.2983 |
| 0.6114 | 2.1500 | 21900 | 0.6457 | 0.0086 | 0.3143 | 0.7468 | 0.3143 |
| 0.606 | 2.1598 | 22000 | 0.6544 | 0.0086 | 0.2930 | 0.7406 | 0.2930 |
| 0.6178 | 2.1696 | 22100 | 0.6427 | 0.0086 | 0.3059 | 0.7445 | 0.3059 |
| 0.6035 | 2.1795 | 22200 | 0.6485 | 0.0086 | 0.3094 | 0.7450 | 0.3094 |
| 0.6935 | 2.1893 | 22300 | 0.6507 | 0.0086 | 0.3079 | 0.7420 | 0.3079 |
| 0.7044 | 2.1991 | 22400 | 0.6572 | 0.0086 | 0.2964 | 0.7409 | 0.2964 |
| 0.6044 | 2.2089 | 22500 | 0.6503 | 0.0086 | 0.3055 | 0.7428 | 0.3055 |
| 0.6211 | 2.2187 | 22600 | 0.6615 | 0.0086 | 0.3095 | 0.7424 | 0.3095 |
| 0.652 | 2.2285 | 22700 | 0.6636 | 0.0086 | 0.2990 | 0.7375 | 0.2990 |
| 0.6864 | 2.2384 | 22800 | 0.6525 | 0.0086 | 0.3096 | 0.7462 | 0.3096 |
| 0.6061 | 2.2482 | 22900 | 0.6345 | 0.0086 | 0.3251 | 0.7514 | 0.3251 |
| 0.5898 | 2.2580 | 23000 | 0.6446 | 0.0086 | 0.3131 | 0.7480 | 0.3131 |
| 0.6624 | 2.2678 | 23100 | 0.6449 | 0.0086 | 0.3124 | 0.7460 | 0.3124 |
| 0.5887 | 2.2776 | 23200 | 0.6488 | 0.0086 | 0.3079 | 0.7452 | 0.3079 |
| 0.6406 | 2.2875 | 23300 | 0.6454 | 0.0086 | 0.3074 | 0.7460 | 0.3074 |
| 0.6178 | 2.2973 | 23400 | 0.6440 | 0.0086 | 0.3117 | 0.7482 | 0.3117 |
| 0.6863 | 2.3071 | 23500 | 0.6487 | 0.0086 | 0.2995 | 0.7413 | 0.2995 |
| 0.5959 | 2.3169 | 23600 | 0.6514 | 0.0086 | 0.3136 | 0.7455 | 0.3136 |
| 0.6634 | 2.3267 | 23700 | 0.6630 | 0.0086 | 0.2979 | 0.7405 | 0.2979 |
| 0.6479 | 2.3365 | 23800 | 0.6395 | 0.0086 | 0.3094 | 0.7469 | 0.3094 |
| 0.6202 | 2.3464 | 23900 | 0.6365 | 0.0086 | 0.3167 | 0.7477 | 0.3167 |
| 0.6391 | 2.3562 | 24000 | 0.6458 | 0.0086 | 0.3125 | 0.7451 | 0.3125 |
| 0.6121 | 2.3660 | 24100 | 0.6394 | 0.0086 | 0.3134 | 0.7487 | 0.3134 |
| 0.6527 | 2.3758 | 24200 | 0.6383 | 0.0086 | 0.3185 | 0.7483 | 0.3185 |
| 0.6274 | 2.3856 | 24300 | 0.6390 | 0.0086 | 0.3220 | 0.7483 | 0.3220 |
| 0.6875 | 2.3954 | 24400 | 0.6506 | 0.0086 | 0.3068 | 0.7434 | 0.3068 |
| 0.6303 | 2.4053 | 24500 | 0.6440 | 0.0086 | 0.3126 | 0.7465 | 0.3126 |
| 0.5843 | 2.4151 | 24600 | 0.6467 | 0.0086 | 0.3114 | 0.7464 | 0.3114 |
| 0.6428 | 2.4249 | 24700 | 0.6383 | 0.0086 | 0.3245 | 0.7511 | 0.3245 |
| 0.6056 | 2.4347 | 24800 | 0.6429 | 0.0086 | 0.3146 | 0.7478 | 0.3146 |
| 0.5889 | 2.4445 | 24900 | 0.6556 | 0.0086 | 0.3088 | 0.7429 | 0.3088 |
| 0.6037 | 2.4543 | 25000 | 0.6539 | 0.0086 | 0.3097 | 0.7445 | 0.3097 |
| 0.6562 | 2.4642 | 25100 | 0.6552 | 0.0086 | 0.3054 | 0.7411 | 0.3054 |
| 0.6968 | 2.4740 | 25200 | 0.6472 | 0.0086 | 0.3117 | 0.7433 | 0.3117 |
| 0.6475 | 2.4838 | 25300 | 0.6379 | 0.0086 | 0.3241 | 0.7515 | 0.3241 |
| 0.5411 | 2.4936 | 25400 | 0.6487 | 0.0086 | 0.3198 | 0.7482 | 0.3198 |
| 0.6338 | 2.5034 | 25500 | 0.6486 | 0.0086 | 0.3087 | 0.7440 | 0.3087 |
| 0.6153 | 2.5133 | 25600 | 0.6346 | 0.0086 | 0.3247 | 0.7513 | 0.3247 |
| 0.6295 | 2.5231 | 25700 | 0.6433 | 0.0086 | 0.3131 | 0.7463 | 0.3131 |
| 0.647 | 2.5329 | 25800 | 0.6393 | 0.0086 | 0.3157 | 0.7487 | 0.3157 |
| 0.6655 | 2.5427 | 25900 | 0.6511 | 0.0086 | 0.3035 | 0.7432 | 0.3035 |
| 0.6389 | 2.5525 | 26000 | 0.6407 | 0.0086 | 0.3126 | 0.7476 | 0.3126 |
| 0.6466 | 2.5623 | 26100 | 0.6542 | 0.0086 | 0.3013 | 0.7436 | 0.3013 |
| 0.6278 | 2.5722 | 26200 | 0.6501 | 0.0086 | 0.3174 | 0.7471 | 0.3174 |
| 0.6777 | 2.5820 | 26300 | 0.6440 | 0.0086 | 0.3108 | 0.7461 | 0.3108 |
| 0.675 | 2.5918 | 26400 | 0.7039 | 0.0086 | 0.2808 | 0.7271 | 0.2808 |
| 0.5784 | 2.6016 | 26500 | 0.6319 | 0.0086 | 0.3187 | 0.7525 | 0.3187 |
| 0.5799 | 2.6114 | 26600 | 0.6425 | 0.0086 | 0.3162 | 0.7489 | 0.3162 |
| 0.6387 | 2.6212 | 26700 | 0.6425 | 0.0086 | 0.3114 | 0.7461 | 0.3114 |
| 0.6148 | 2.6311 | 26800 | 0.6359 | 0.0086 | 0.3225 | 0.7514 | 0.3225 |
| 0.642 | 2.6409 | 26900 | 0.6517 | 0.0086 | 0.3148 | 0.7477 | 0.3148 |
| 0.693 | 2.6507 | 27000 | 0.6410 | 0.0086 | 0.3181 | 0.7483 | 0.3181 |
| 0.5909 | 2.6605 | 27100 | 0.6392 | 0.0086 | 0.3163 | 0.7483 | 0.3163 |
| 0.6181 | 2.6703 | 27200 | 0.6393 | 0.0086 | 0.3230 | 0.7502 | 0.3230 |
| 0.6054 | 2.6801 | 27300 | 0.6406 | 0.0086 | 0.3164 | 0.7495 | 0.3164 |
| 0.6204 | 2.6900 | 27400 | 0.6434 | 0.0086 | 0.3183 | 0.7491 | 0.3183 |
| 0.6243 | 2.6998 | 27500 | 0.6329 | 0.0086 | 0.3207 | 0.7504 | 0.3207 |
| 0.6229 | 2.7096 | 27600 | 0.6475 | 0.0086 | 0.2990 | 0.7423 | 0.2990 |
| 0.6266 | 2.7194 | 27700 | 0.6295 | 0.0086 | 0.3206 | 0.7521 | 0.3206 |
| 0.6114 | 2.7292 | 27800 | 0.6369 | 0.0086 | 0.3223 | 0.7486 | 0.3223 |
| 0.6293 | 2.7391 | 27900 | 0.6518 | 0.0086 | 0.3102 | 0.7452 | 0.3102 |
| 0.6384 | 2.7489 | 28000 | 0.6277 | 0.0086 | 0.3260 | 0.7547 | 0.3260 |
| 0.562 | 2.7587 | 28100 | 0.6382 | 0.0086 | 0.3225 | 0.7500 | 0.3225 |
| 0.5943 | 2.7685 | 28200 | 0.6374 | 0.0086 | 0.3122 | 0.7484 | 0.3122 |
| 0.6021 | 2.7783 | 28300 | 0.6378 | 0.0086 | 0.3147 | 0.7480 | 0.3147 |
| 0.6254 | 2.7881 | 28400 | 0.6394 | 0.0086 | 0.3204 | 0.7497 | 0.3204 |
| 0.5927 | 2.7980 | 28500 | 0.6364 | 0.0086 | 0.3194 | 0.7505 | 0.3194 |
| 0.6458 | 2.8078 | 28600 | 0.6401 | 0.0086 | 0.3210 | 0.7510 | 0.3210 |
| 0.5987 | 2.8176 | 28700 | 0.6387 | 0.0086 | 0.3172 | 0.7493 | 0.3172 |
| 0.6138 | 2.8274 | 28800 | 0.6323 | 0.0086 | 0.3178 | 0.7513 | 0.3178 |
| 0.7018 | 2.8372 | 28900 | 0.6313 | 0.0086 | 0.3278 | 0.7544 | 0.3278 |
| 0.5963 | 2.8470 | 29000 | 0.6363 | 0.0086 | 0.3182 | 0.7498 | 0.3182 |
| 0.6068 | 2.8569 | 29100 | 0.6301 | 0.0086 | 0.3258 | 0.7543 | 0.3258 |
| 0.6323 | 2.8667 | 29200 | 0.6318 | 0.0086 | 0.3202 | 0.7515 | 0.3202 |
| 0.6109 | 2.8765 | 29300 | 0.6360 | 0.0086 | 0.3135 | 0.7506 | 0.3135 |
| 0.5366 | 2.8863 | 29400 | 0.6317 | 0.0086 | 0.3209 | 0.7532 | 0.3209 |
| 0.5891 | 2.8961 | 29500 | 0.6396 | 0.0086 | 0.3247 | 0.7510 | 0.3247 |
| 0.6369 | 2.9059 | 29600 | 0.6447 | 0.0086 | 0.3172 | 0.7481 | 0.3172 |
| 0.6215 | 2.9158 | 29700 | 0.6435 | 0.0086 | 0.3104 | 0.7473 | 0.3104 |
| 0.5796 | 2.9256 | 29800 | 0.6325 | 0.0086 | 0.3216 | 0.7515 | 0.3216 |
| 0.5961 | 2.9354 | 29900 | 0.6326 | 0.0086 | 0.3185 | 0.7512 | 0.3185 |
| 0.6348 | 2.9452 | 30000 | 0.6420 | 0.0086 | 0.3226 | 0.7490 | 0.3226 |
| 0.6075 | 2.9550 | 30100 | 0.6309 | 0.0086 | 0.3270 | 0.7528 | 0.3270 |
| 0.6128 | 2.9649 | 30200 | 0.6244 | 0.0086 | 0.3280 | 0.7534 | 0.3280 |
| 0.6271 | 2.9747 | 30300 | 0.6311 | 0.0086 | 0.3183 | 0.7508 | 0.3183 |
| 0.6499 | 2.9845 | 30400 | 0.6325 | 0.0086 | 0.3258 | 0.7516 | 0.3258 |
| 0.7241 | 2.9943 | 30500 | 0.6272 | 0.0086 | 0.3220 | 0.7540 | 0.3220 |
| 0.7342 | 3.0041 | 30600 | 0.6301 | 0.0086 | 0.3250 | 0.7520 | 0.3250 |
| 0.6141 | 3.0139 | 30700 | 0.6290 | 0.0086 | 0.3289 | 0.7555 | 0.3289 |
| 0.6286 | 3.0238 | 30800 | 0.6386 | 0.0086 | 0.3112 | 0.7482 | 0.3112 |
| 0.7168 | 3.0336 | 30900 | 0.6307 | 0.0086 | 0.3280 | 0.7535 | 0.3280 |
| 0.6267 | 3.0434 | 31000 | 0.6348 | 0.0086 | 0.3246 | 0.7516 | 0.3246 |
| 0.6754 | 3.0532 | 31100 | 0.6369 | 0.0086 | 0.3227 | 0.7502 | 0.3227 |
| 0.6442 | 3.0630 | 31200 | 0.6256 | 0.0086 | 0.3269 | 0.7526 | 0.3269 |
| 0.621 | 3.0728 | 31300 | 0.6245 | 0.0086 | 0.3312 | 0.7539 | 0.3312 |
| 0.6641 | 3.0827 | 31400 | 0.6275 | 0.0086 | 0.3233 | 0.7531 | 0.3233 |
| 0.6074 | 3.0925 | 31500 | 0.6295 | 0.0086 | 0.3231 | 0.7526 | 0.3231 |
| 0.5997 | 3.1023 | 31600 | 0.6262 | 0.0086 | 0.3243 | 0.7541 | 0.3243 |
| 0.5985 | 3.1121 | 31700 | 0.6281 | 0.0086 | 0.3234 | 0.7521 | 0.3234 |
| 0.6224 | 3.1219 | 31800 | 0.6291 | 0.0086 | 0.3213 | 0.7520 | 0.3213 |
| 0.5988 | 3.1317 | 31900 | 0.6260 | 0.0086 | 0.3353 | 0.7552 | 0.3353 |
| 0.6372 | 3.1416 | 32000 | 0.6295 | 0.0086 | 0.3212 | 0.7515 | 0.3212 |
| 0.6432 | 3.1514 | 32100 | 0.6359 | 0.0086 | 0.3165 | 0.7503 | 0.3165 |
| 0.6639 | 3.1612 | 32200 | 0.6317 | 0.0086 | 0.3231 | 0.7528 | 0.3231 |
| 0.6649 | 3.1710 | 32300 | 0.6274 | 0.0086 | 0.3204 | 0.7526 | 0.3204 |
| 0.6454 | 3.1808 | 32400 | 0.6247 | 0.0086 | 0.3214 | 0.7527 | 0.3214 |
| 0.6535 | 3.1907 | 32500 | 0.6296 | 0.0086 | 0.3275 | 0.7525 | 0.3275 |
| 0.6824 | 3.2005 | 32600 | 0.6279 | 0.0086 | 0.3259 | 0.7547 | 0.3259 |
| 0.6055 | 3.2103 | 32700 | 0.6287 | 0.0086 | 0.3315 | 0.7544 | 0.3315 |
| 0.6149 | 3.2201 | 32800 | 0.6251 | 0.0086 | 0.3256 | 0.7524 | 0.3256 |
| 0.6575 | 3.2299 | 32900 | 0.6326 | 0.0086 | 0.3211 | 0.7520 | 0.3211 |
| 0.5945 | 3.2397 | 33000 | 0.6307 | 0.0086 | 0.3260 | 0.7533 | 0.3260 |
| 0.6324 | 3.2496 | 33100 | 0.6260 | 0.0086 | 0.3240 | 0.7543 | 0.3240 |
| 0.6308 | 3.2594 | 33200 | 0.6230 | 0.0086 | 0.3240 | 0.7563 | 0.3240 |
| 0.5727 | 3.2692 | 33300 | 0.6271 | 0.0086 | 0.3307 | 0.7529 | 0.3307 |
| 0.6216 | 3.2790 | 33400 | 0.6216 | 0.0086 | 0.3335 | 0.7561 | 0.3335 |
| 0.5931 | 3.2888 | 33500 | 0.6329 | 0.0086 | 0.3227 | 0.7510 | 0.3227 |
| 0.6986 | 3.2986 | 33600 | 0.6285 | 0.0086 | 0.3267 | 0.7540 | 0.3267 |
| 0.5884 | 3.3085 | 33700 | 0.6244 | 0.0086 | 0.3242 | 0.7551 | 0.3242 |
| 0.6978 | 3.3183 | 33800 | 0.6264 | 0.0086 | 0.3332 | 0.7544 | 0.3332 |
| 0.6321 | 3.3281 | 33900 | 0.6191 | 0.0086 | 0.3306 | 0.7559 | 0.3306 |
| 0.6489 | 3.3379 | 34000 | 0.6314 | 0.0086 | 0.3256 | 0.7523 | 0.3256 |
| 0.6165 | 3.3477 | 34100 | 0.6354 | 0.0086 | 0.3288 | 0.7523 | 0.3288 |
| 0.593 | 3.3575 | 34200 | 0.6151 | 0.0086 | 0.3327 | 0.7580 | 0.3327 |
| 0.6133 | 3.3674 | 34300 | 0.6349 | 0.0086 | 0.3220 | 0.7506 | 0.3220 |
| 0.601 | 3.3772 | 34400 | 0.6277 | 0.0086 | 0.3257 | 0.7545 | 0.3257 |
| 0.6228 | 3.3870 | 34500 | 0.6283 | 0.0086 | 0.3258 | 0.7530 | 0.3258 |
| 0.5581 | 3.3968 | 34600 | 0.6314 | 0.0086 | 0.3250 | 0.7532 | 0.3250 |
| 0.5727 | 3.4066 | 34700 | 0.6284 | 0.0086 | 0.3274 | 0.7532 | 0.3274 |
| 0.6318 | 3.4165 | 34800 | 0.6225 | 0.0086 | 0.3259 | 0.7547 | 0.3259 |
| 0.6408 | 3.4263 | 34900 | 0.6169 | 0.0086 | 0.3342 | 0.7579 | 0.3342 |
| 0.6644 | 3.4361 | 35000 | 0.6223 | 0.0086 | 0.3270 | 0.7550 | 0.3270 |
| 0.5617 | 3.4459 | 35100 | 0.6247 | 0.0086 | 0.3224 | 0.7523 | 0.3224 |
| 0.6184 | 3.4557 | 35200 | 0.6307 | 0.0086 | 0.3175 | 0.7531 | 0.3175 |
| 0.5904 | 3.4655 | 35300 | 0.6291 | 0.0086 | 0.3295 | 0.7544 | 0.3295 |
| 0.5808 | 3.4754 | 35400 | 0.6134 | 0.0086 | 0.3356 | 0.7592 | 0.3356 |
| 0.6185 | 3.4852 | 35500 | 0.6243 | 0.0086 | 0.3216 | 0.7532 | 0.3216 |
| 0.619 | 3.4950 | 35600 | 0.6243 | 0.0086 | 0.3292 | 0.7533 | 0.3292 |
| 0.6291 | 3.5048 | 35700 | 0.6270 | 0.0086 | 0.3336 | 0.7561 | 0.3336 |
| 0.6426 | 3.5146 | 35800 | 0.6201 | 0.0086 | 0.3245 | 0.7540 | 0.3245 |
| 0.6253 | 3.5244 | 35900 | 0.6189 | 0.0086 | 0.3360 | 0.7574 | 0.3360 |
| 0.579 | 3.5343 | 36000 | 0.6217 | 0.0086 | 0.3329 | 0.7549 | 0.3329 |
| 0.5749 | 3.5441 | 36100 | 0.6211 | 0.0086 | 0.3333 | 0.7566 | 0.3333 |
| 0.6792 | 3.5539 | 36200 | 0.6333 | 0.0086 | 0.3195 | 0.7488 | 0.3195 |
| 0.5553 | 3.5637 | 36300 | 0.6318 | 0.0086 | 0.3220 | 0.7518 | 0.3220 |
| 0.6074 | 3.5735 | 36400 | 0.6371 | 0.0086 | 0.3325 | 0.7525 | 0.3325 |
| 0.6514 | 3.5833 | 36500 | 0.6171 | 0.0086 | 0.3298 | 0.7559 | 0.3298 |
| 0.6312 | 3.5932 | 36600 | 0.6255 | 0.0086 | 0.3314 | 0.7528 | 0.3314 |
| 0.5982 | 3.6030 | 36700 | 0.6163 | 0.0086 | 0.3397 | 0.7600 | 0.3397 |
| 0.6956 | 3.6128 | 36800 | 0.6194 | 0.0086 | 0.3352 | 0.7596 | 0.3352 |
| 0.5358 | 3.6226 | 36900 | 0.6254 | 0.0086 | 0.3317 | 0.7563 | 0.3317 |
| 0.5568 | 3.6324 | 37000 | 0.6210 | 0.0086 | 0.3258 | 0.7556 | 0.3258 |
| 0.6064 | 3.6423 | 37100 | 0.6217 | 0.0086 | 0.3344 | 0.7548 | 0.3344 |
| 0.5905 | 3.6521 | 37200 | 0.6207 | 0.0086 | 0.3238 | 0.7549 | 0.3238 |
| 0.6099 | 3.6619 | 37300 | 0.6185 | 0.0086 | 0.3314 | 0.7562 | 0.3314 |
| 0.6042 | 3.6717 | 37400 | 0.6180 | 0.0086 | 0.3368 | 0.7585 | 0.3368 |
| 0.6655 | 3.6815 | 37500 | 0.6173 | 0.0086 | 0.3291 | 0.7574 | 0.3291 |
| 0.5984 | 3.6913 | 37600 | 0.6227 | 0.0086 | 0.3333 | 0.7570 | 0.3333 |
| 0.6124 | 3.7012 | 37700 | 0.6392 | 0.0086 | 0.3122 | 0.7488 | 0.3122 |
| 0.6289 | 3.7110 | 37800 | 0.6201 | 0.0086 | 0.3302 | 0.7555 | 0.3302 |
| 0.5921 | 3.7208 | 37900 | 0.6159 | 0.0086 | 0.3359 | 0.7577 | 0.3359 |
| 0.5599 | 3.7306 | 38000 | 0.6200 | 0.0086 | 0.3307 | 0.7554 | 0.3307 |
| 0.6032 | 3.7404 | 38100 | 0.6209 | 0.0086 | 0.3316 | 0.7556 | 0.3316 |
| 0.5903 | 3.7502 | 38200 | 0.6192 | 0.0086 | 0.3354 | 0.7560 | 0.3354 |
| 0.6303 | 3.7601 | 38300 | 0.6234 | 0.0086 | 0.3402 | 0.7587 | 0.3402 |
| 0.692 | 3.7699 | 38400 | 0.6188 | 0.0086 | 0.3300 | 0.7556 | 0.3300 |
| 0.6642 | 3.7797 | 38500 | 0.6186 | 0.0086 | 0.3328 | 0.7573 | 0.3328 |
| 0.6828 | 3.7895 | 38600 | 0.6297 | 0.0086 | 0.3275 | 0.7533 | 0.3275 |
| 0.5568 | 3.7993 | 38700 | 0.6184 | 0.0086 | 0.3330 | 0.7571 | 0.3330 |
| 0.6665 | 3.8091 | 38800 | 0.6226 | 0.0086 | 0.3372 | 0.7561 | 0.3372 |
| 0.5939 | 3.8190 | 38900 | 0.6160 | 0.0086 | 0.3289 | 0.7576 | 0.3289 |
| 0.6243 | 3.8288 | 39000 | 0.6210 | 0.0086 | 0.3345 | 0.7590 | 0.3345 |
| 0.6478 | 3.8386 | 39100 | 0.6151 | 0.0086 | 0.3408 | 0.7596 | 0.3408 |
| 0.6703 | 3.8484 | 39200 | 0.6220 | 0.0086 | 0.3304 | 0.7568 | 0.3304 |
| 0.5973 | 3.8582 | 39300 | 0.6112 | 0.0086 | 0.3394 | 0.7620 | 0.3394 |
| 0.6219 | 3.8681 | 39400 | 0.6164 | 0.0086 | 0.3406 | 0.7595 | 0.3406 |
| 0.6838 | 3.8779 | 39500 | 0.6331 | 0.0086 | 0.3251 | 0.7527 | 0.3251 |
| 0.6345 | 3.8877 | 39600 | 0.6278 | 0.0086 | 0.3291 | 0.7529 | 0.3291 |
| 0.6009 | 3.8975 | 39700 | 0.6269 | 0.0086 | 0.3314 | 0.7547 | 0.3314 |
| 0.6099 | 3.9073 | 39800 | 0.6143 | 0.0086 | 0.3394 | 0.7591 | 0.3394 |
| 0.621 | 3.9171 | 39900 | 0.6127 | 0.0086 | 0.3374 | 0.7604 | 0.3374 |
| 0.7027 | 3.9270 | 40000 | 0.6209 | 0.0086 | 0.3253 | 0.7541 | 0.3253 |
| 0.5991 | 3.9368 | 40100 | 0.6309 | 0.0086 | 0.3248 | 0.7529 | 0.3248 |
| 0.6413 | 3.9466 | 40200 | 0.6209 | 0.0086 | 0.3319 | 0.7569 | 0.3319 |
| 0.624 | 3.9564 | 40300 | 0.6172 | 0.0086 | 0.3340 | 0.7575 | 0.3340 |
| 0.6397 | 3.9662 | 40400 | 0.6172 | 0.0086 | 0.3367 | 0.7577 | 0.3367 |
| 0.6325 | 3.9760 | 40500 | 0.6227 | 0.0086 | 0.3378 | 0.7568 | 0.3378 |
| 0.6255 | 3.9859 | 40600 | 0.6081 | 0.0086 | 0.3423 | 0.7610 | 0.3423 |
| 0.7132 | 3.9957 | 40700 | 0.6238 | 0.0086 | 0.3326 | 0.7568 | 0.3326 |
| 0.6054 | 4.0055 | 40800 | 0.6181 | 0.0086 | 0.3389 | 0.7598 | 0.3389 |
| 0.5804 | 4.0153 | 40900 | 0.6175 | 0.0086 | 0.3242 | 0.7564 | 0.3242 |
| 0.6081 | 4.0251 | 41000 | 0.6130 | 0.0086 | 0.3316 | 0.7584 | 0.3316 |
| 0.6502 | 4.0349 | 41100 | 0.6204 | 0.0086 | 0.3328 | 0.7568 | 0.3328 |
| 0.5431 | 4.0448 | 41200 | 0.6155 | 0.0086 | 0.3425 | 0.7606 | 0.3425 |
| 0.5856 | 4.0546 | 41300 | 0.6188 | 0.0086 | 0.3363 | 0.7574 | 0.3363 |
| 0.6155 | 4.0644 | 41400 | 0.6159 | 0.0086 | 0.3361 | 0.7596 | 0.3361 |
| 0.594 | 4.0742 | 41500 | 0.6203 | 0.0086 | 0.3375 | 0.7574 | 0.3375 |
| 0.5639 | 4.0840 | 41600 | 0.6090 | 0.0086 | 0.3463 | 0.7621 | 0.3463 |
| 0.6101 | 4.0939 | 41700 | 0.6171 | 0.0086 | 0.3291 | 0.7585 | 0.3291 |
| 0.5606 | 4.1037 | 41800 | 0.6124 | 0.0086 | 0.3324 | 0.7594 | 0.3324 |
| 0.573 | 4.1135 | 41900 | 0.6121 | 0.0086 | 0.3381 | 0.7613 | 0.3381 |
| 0.5933 | 4.1233 | 42000 | 0.6058 | 0.0086 | 0.3394 | 0.7620 | 0.3394 |
| 0.564 | 4.1331 | 42100 | 0.6202 | 0.0086 | 0.3306 | 0.7567 | 0.3306 |
| 0.5657 | 4.1429 | 42200 | 0.6111 | 0.0086 | 0.3476 | 0.7626 | 0.3476 |
| 0.6831 | 4.1528 | 42300 | 0.6190 | 0.0086 | 0.3338 | 0.7574 | 0.3338 |
| 0.6247 | 4.1626 | 42400 | 0.6146 | 0.0086 | 0.3363 | 0.7586 | 0.3363 |
| 0.5744 | 4.1724 | 42500 | 0.6080 | 0.0086 | 0.3402 | 0.7625 | 0.3402 |
| 0.6673 | 4.1822 | 42600 | 0.6197 | 0.0086 | 0.3327 | 0.7575 | 0.3327 |
| 0.6368 | 4.1920 | 42700 | 0.6141 | 0.0086 | 0.3372 | 0.7599 | 0.3372 |
| 0.5965 | 4.2018 | 42800 | 0.6219 | 0.0086 | 0.3291 | 0.7557 | 0.3291 |
| 0.6001 | 4.2117 | 42900 | 0.6141 | 0.0086 | 0.3390 | 0.7603 | 0.3390 |
| 0.6449 | 4.2215 | 43000 | 0.6235 | 0.0086 | 0.3331 | 0.7567 | 0.3331 |
| 0.6381 | 4.2313 | 43100 | 0.6148 | 0.0086 | 0.3338 | 0.7597 | 0.3338 |
| 0.6426 | 4.2411 | 43200 | 0.6049 | 0.0086 | 0.3412 | 0.7613 | 0.3412 |
| 0.5596 | 4.2509 | 43300 | 0.6105 | 0.0086 | 0.3375 | 0.7606 | 0.3375 |
| 0.5768 | 4.2608 | 43400 | 0.6222 | 0.0086 | 0.3278 | 0.7553 | 0.3278 |
| 0.6451 | 4.2706 | 43500 | 0.6137 | 0.0086 | 0.3369 | 0.7591 | 0.3369 |
| 0.5864 | 4.2804 | 43600 | 0.6148 | 0.0086 | 0.3334 | 0.7590 | 0.3334 |
| 0.5822 | 4.2902 | 43700 | 0.6028 | 0.0086 | 0.3430 | 0.7638 | 0.3430 |
| 0.6527 | 4.3000 | 43800 | 0.6095 | 0.0086 | 0.3373 | 0.7588 | 0.3373 |
| 0.7008 | 4.3098 | 43900 | 0.6193 | 0.0086 | 0.3338 | 0.7564 | 0.3338 |
| 0.5279 | 4.3197 | 44000 | 0.6061 | 0.0086 | 0.3485 | 0.7632 | 0.3485 |
| 0.5885 | 4.3295 | 44100 | 0.6144 | 0.0086 | 0.3372 | 0.7588 | 0.3372 |
| 0.5261 | 4.3393 | 44200 | 0.6154 | 0.0086 | 0.3354 | 0.7579 | 0.3354 |
| 0.6226 | 4.3491 | 44300 | 0.6145 | 0.0086 | 0.3380 | 0.7588 | 0.3380 |
| 0.576 | 4.3589 | 44400 | 0.6124 | 0.0086 | 0.3358 | 0.7595 | 0.3358 |
| 0.6455 | 4.3687 | 44500 | 0.6099 | 0.0086 | 0.3413 | 0.7611 | 0.3413 |
| 0.6287 | 4.3786 | 44600 | 0.6069 | 0.0086 | 0.3422 | 0.7611 | 0.3422 |
| 0.6038 | 4.3884 | 44700 | 0.6081 | 0.0086 | 0.3441 | 0.7611 | 0.3441 |
| 0.6558 | 4.3982 | 44800 | 0.6119 | 0.0086 | 0.3450 | 0.7616 | 0.3450 |
| 0.6699 | 4.4080 | 44900 | 0.6243 | 0.0086 | 0.3343 | 0.7563 | 0.3343 |
| 0.5422 | 4.4178 | 45000 | 0.6063 | 0.0086 | 0.3393 | 0.7627 | 0.3393 |
| 0.659 | 4.4276 | 45100 | 0.6144 | 0.0086 | 0.3385 | 0.7582 | 0.3385 |
| 0.6124 | 4.4375 | 45200 | 0.6233 | 0.0086 | 0.3271 | 0.7556 | 0.3271 |
| 0.625 | 4.4473 | 45300 | 0.6142 | 0.0086 | 0.3431 | 0.7605 | 0.3431 |
| 0.5892 | 4.4571 | 45400 | 0.6110 | 0.0086 | 0.3420 | 0.7611 | 0.3420 |
| 0.5941 | 4.4669 | 45500 | 0.6123 | 0.0086 | 0.3331 | 0.7607 | 0.3331 |
| 0.6259 | 4.4767 | 45600 | 0.6241 | 0.0086 | 0.3342 | 0.7567 | 0.3342 |
| 0.6079 | 4.4866 | 45700 | 0.6133 | 0.0086 | 0.3345 | 0.7592 | 0.3345 |
| 0.6241 | 4.4964 | 45800 | 0.6102 | 0.0086 | 0.3428 | 0.7624 | 0.3428 |
| 0.6058 | 4.5062 | 45900 | 0.6149 | 0.0086 | 0.3395 | 0.7592 | 0.3395 |
| 0.5642 | 4.5160 | 46000 | 0.6140 | 0.0086 | 0.3389 | 0.7615 | 0.3389 |
| 0.6282 | 4.5258 | 46100 | 0.6189 | 0.0086 | 0.3331 | 0.7584 | 0.3331 |
| 0.5885 | 4.5356 | 46200 | 0.6321 | 0.0086 | 0.3258 | 0.7538 | 0.3258 |
| 0.5897 | 4.5455 | 46300 | 0.6094 | 0.0086 | 0.3372 | 0.7603 | 0.3372 |
| 0.6497 | 4.5553 | 46400 | 0.6160 | 0.0086 | 0.3354 | 0.7579 | 0.3354 |
| 0.6238 | 4.5651 | 46500 | 0.6149 | 0.0086 | 0.3308 | 0.7586 | 0.3308 |
| 0.6 | 4.5749 | 46600 | 0.6033 | 0.0086 | 0.3424 | 0.7622 | 0.3424 |
| 0.6107 | 4.5847 | 46700 | 0.6067 | 0.0086 | 0.3401 | 0.7614 | 0.3401 |
| 0.5665 | 4.5945 | 46800 | 0.6116 | 0.0086 | 0.3448 | 0.7629 | 0.3448 |
| 0.6129 | 4.6044 | 46900 | 0.6079 | 0.0086 | 0.3420 | 0.7623 | 0.3420 |
| 0.6378 | 4.6142 | 47000 | 0.6505 | 0.0086 | 0.3203 | 0.7494 | 0.3203 |
| 0.5778 | 4.6240 | 47100 | 0.6141 | 0.0086 | 0.3392 | 0.7607 | 0.3392 |
| 0.5997 | 4.6338 | 47200 | 0.6099 | 0.0086 | 0.3418 | 0.7611 | 0.3418 |
| 0.6807 | 4.6436 | 47300 | 0.6094 | 0.0086 | 0.3335 | 0.7608 | 0.3335 |
| 0.6218 | 4.6534 | 47400 | 0.6185 | 0.0086 | 0.3393 | 0.7605 | 0.3393 |
| 0.5647 | 4.6633 | 47500 | 0.6059 | 0.0086 | 0.3416 | 0.7624 | 0.3416 |
| 0.6388 | 4.6731 | 47600 | 0.6188 | 0.0086 | 0.3391 | 0.7590 | 0.3391 |
| 0.5666 | 4.6829 | 47700 | 0.6284 | 0.0086 | 0.3287 | 0.7549 | 0.3287 |
| 0.5354 | 4.6927 | 47800 | 0.6084 | 0.0086 | 0.3398 | 0.7611 | 0.3398 |
| 0.5492 | 4.7025 | 47900 | 0.6106 | 0.0086 | 0.3414 | 0.7598 | 0.3414 |
| 0.5916 | 4.7124 | 48000 | 0.6086 | 0.0086 | 0.3461 | 0.7618 | 0.3461 |
| 0.5775 | 4.7222 | 48100 | 0.6012 | 0.0086 | 0.3408 | 0.7635 | 0.3408 |
| 0.6448 | 4.7320 | 48200 | 0.6089 | 0.0086 | 0.3461 | 0.7620 | 0.3461 |
| 0.6334 | 4.7418 | 48300 | 0.6112 | 0.0086 | 0.3436 | 0.7613 | 0.3436 |
| 0.5845 | 4.7516 | 48400 | 0.6071 | 0.0086 | 0.3432 | 0.7626 | 0.3432 |
| 0.6117 | 4.7614 | 48500 | 0.6101 | 0.0086 | 0.3381 | 0.7606 | 0.3381 |
| 0.5826 | 4.7713 | 48600 | 0.6209 | 0.0086 | 0.3336 | 0.7567 | 0.3336 |
| 0.5915 | 4.7811 | 48700 | 0.6224 | 0.0086 | 0.3331 | 0.7567 | 0.3331 |
| 0.6079 | 4.7909 | 48800 | 0.6121 | 0.0086 | 0.3354 | 0.7574 | 0.3354 |
| 0.6398 | 4.8007 | 48900 | 0.6082 | 0.0086 | 0.3393 | 0.7621 | 0.3393 |
| 0.6582 | 4.8105 | 49000 | 0.6132 | 0.0086 | 0.3383 | 0.7597 | 0.3383 |
| 0.5721 | 4.8203 | 49100 | 0.6052 | 0.0086 | 0.3428 | 0.7612 | 0.3428 |
| 0.5446 | 4.8302 | 49200 | 0.6050 | 0.0086 | 0.3399 | 0.7616 | 0.3399 |
| 0.6274 | 4.8400 | 49300 | 0.6048 | 0.0086 | 0.3420 | 0.7623 | 0.3420 |
| 0.5768 | 4.8498 | 49400 | 0.6091 | 0.0086 | 0.3419 | 0.7605 | 0.3419 |
| 0.5624 | 4.8596 | 49500 | 0.6086 | 0.0086 | 0.3453 | 0.7622 | 0.3453 |
| 0.5282 | 4.8694 | 49600 | 0.6024 | 0.0086 | 0.3450 | 0.7631 | 0.3450 |
| 0.624 | 4.8792 | 49700 | 0.6019 | 0.0086 | 0.3475 | 0.7644 | 0.3475 |
| 0.6453 | 4.8891 | 49800 | 0.6112 | 0.0086 | 0.3374 | 0.7587 | 0.3374 |
| 0.6002 | 4.8989 | 49900 | 0.6050 | 0.0086 | 0.3405 | 0.7618 | 0.3405 |
| 0.6535 | 4.9087 | 50000 | 0.6079 | 0.0086 | 0.3452 | 0.7625 | 0.3452 |
| 0.581 | 4.9185 | 50100 | 0.6062 | 0.0086 | 0.3385 | 0.7624 | 0.3385 |
| 0.5543 | 4.9283 | 50200 | 0.6163 | 0.0086 | 0.3400 | 0.7601 | 0.3400 |
| 0.613 | 4.9382 | 50300 | 0.6018 | 0.0086 | 0.3444 | 0.7638 | 0.3444 |
| 0.6704 | 4.9480 | 50400 | 0.6099 | 0.0086 | 0.3398 | 0.7597 | 0.3398 |
| 0.5886 | 4.9578 | 50500 | 0.6127 | 0.0086 | 0.3347 | 0.7586 | 0.3347 |
| 0.5684 | 4.9676 | 50600 | 0.6142 | 0.0086 | 0.3307 | 0.7573 | 0.3307 |
| 0.5771 | 4.9774 | 50700 | 0.6121 | 0.0086 | 0.3393 | 0.7596 | 0.3393 |
| 0.5673 | 4.9872 | 50800 | 0.6106 | 0.0086 | 0.3418 | 0.7615 | 0.3418 |
| 0.6015 | 4.9971 | 50900 | 0.6081 | 0.0086 | 0.3424 | 0.7606 | 0.3424 |
| 0.5331 | 5.0069 | 51000 | 0.6120 | 0.0086 | 0.3414 | 0.7605 | 0.3414 |
| 0.5991 | 5.0167 | 51100 | 0.6109 | 0.0086 | 0.3363 | 0.7601 | 0.3363 |
| 0.5822 | 5.0265 | 51200 | 0.6110 | 0.0086 | 0.3427 | 0.7615 | 0.3427 |
| 0.623 | 5.0363 | 51300 | 0.6073 | 0.0086 | 0.3450 | 0.7643 | 0.3450 |
| 0.65 | 5.0461 | 51400 | 0.6152 | 0.0086 | 0.3333 | 0.7594 | 0.3333 |
| 0.5721 | 5.0560 | 51500 | 0.6006 | 0.0086 | 0.3492 | 0.7658 | 0.3492 |
| 0.618 | 5.0658 | 51600 | 0.6044 | 0.0086 | 0.3390 | 0.7613 | 0.3390 |
| 0.5617 | 5.0756 | 51700 | 0.6076 | 0.0086 | 0.3461 | 0.7622 | 0.3461 |
| 0.6361 | 5.0854 | 51800 | 0.6057 | 0.0086 | 0.3470 | 0.7638 | 0.3470 |
| 0.5826 | 5.0952 | 51900 | 0.5970 | 0.0086 | 0.3438 | 0.7653 | 0.3438 |
| 0.5781 | 5.1050 | 52000 | 0.6025 | 0.0086 | 0.3414 | 0.7633 | 0.3414 |
| 0.6573 | 5.1149 | 52100 | 0.6040 | 0.0086 | 0.3443 | 0.7636 | 0.3443 |
| 0.6252 | 5.1247 | 52200 | 0.6062 | 0.0086 | 0.3476 | 0.7638 | 0.3476 |
| 0.6614 | 5.1345 | 52300 | 0.6083 | 0.0086 | 0.3450 | 0.7618 | 0.3450 |
| 0.6139 | 5.1443 | 52400 | 0.6022 | 0.0086 | 0.3463 | 0.7636 | 0.3463 |
| 0.5606 | 5.1541 | 52500 | 0.6101 | 0.0086 | 0.3399 | 0.7599 | 0.3399 |
| 0.593 | 5.1640 | 52600 | 0.6016 | 0.0086 | 0.3483 | 0.7645 | 0.3483 |
| 0.6151 | 5.1738 | 52700 | 0.6036 | 0.0086 | 0.3514 | 0.7648 | 0.3514 |
| 0.6368 | 5.1836 | 52800 | 0.6018 | 0.0086 | 0.3401 | 0.7609 | 0.3401 |
| 0.5631 | 5.1934 | 52900 | 0.6036 | 0.0086 | 0.3463 | 0.7634 | 0.3463 |
| 0.5767 | 5.2032 | 53000 | 0.5991 | 0.0086 | 0.3479 | 0.7638 | 0.3479 |
| 0.6535 | 5.2130 | 53100 | 0.6100 | 0.0086 | 0.3430 | 0.7610 | 0.3430 |
| 0.6193 | 5.2229 | 53200 | 0.6066 | 0.0086 | 0.3447 | 0.7617 | 0.3447 |
| 0.5957 | 5.2327 | 53300 | 0.6078 | 0.0086 | 0.3398 | 0.7617 | 0.3398 |
| 0.5539 | 5.2425 | 53400 | 0.6139 | 0.0086 | 0.3459 | 0.7603 | 0.3459 |
| 0.6232 | 5.2523 | 53500 | 0.6078 | 0.0086 | 0.3372 | 0.7610 | 0.3372 |
| 0.6099 | 5.2621 | 53600 | 0.5966 | 0.0086 | 0.3527 | 0.7666 | 0.3527 |
| 0.5619 | 5.2719 | 53700 | 0.6077 | 0.0086 | 0.3355 | 0.7598 | 0.3355 |
| 0.6722 | 5.2818 | 53800 | 0.6247 | 0.0086 | 0.3291 | 0.7554 | 0.3291 |
| 0.61 | 5.2916 | 53900 | 0.6103 | 0.0086 | 0.3441 | 0.7620 | 0.3441 |
| 0.5403 | 5.3014 | 54000 | 0.6085 | 0.0086 | 0.3436 | 0.7620 | 0.3436 |
| 0.5585 | 5.3112 | 54100 | 0.6024 | 0.0086 | 0.3443 | 0.7623 | 0.3443 |
| 0.6503 | 5.3210 | 54200 | 0.6026 | 0.0086 | 0.3432 | 0.7626 | 0.3432 |
| 0.5583 | 5.3308 | 54300 | 0.6085 | 0.0086 | 0.3346 | 0.7607 | 0.3346 |
| 0.6347 | 5.3407 | 54400 | 0.6057 | 0.0086 | 0.3409 | 0.7623 | 0.3409 |
| 0.5882 | 5.3505 | 54500 | 0.6048 | 0.0086 | 0.3367 | 0.7606 | 0.3367 |
| 0.5565 | 5.3603 | 54600 | 0.6056 | 0.0086 | 0.3473 | 0.7648 | 0.3473 |
| 0.5465 | 5.3701 | 54700 | 0.6017 | 0.0086 | 0.3559 | 0.7650 | 0.3559 |
| 0.5752 | 5.3799 | 54800 | 0.6077 | 0.0086 | 0.3498 | 0.7642 | 0.3498 |
| 0.6498 | 5.3898 | 54900 | 0.6008 | 0.0086 | 0.3488 | 0.7645 | 0.3488 |
| 0.5355 | 5.3996 | 55000 | 0.5994 | 0.0086 | 0.3421 | 0.7652 | 0.3421 |
| 0.6006 | 5.4094 | 55100 | 0.6091 | 0.0086 | 0.3365 | 0.7609 | 0.3365 |
| 0.5346 | 5.4192 | 55200 | 0.6074 | 0.0086 | 0.3432 | 0.7637 | 0.3432 |
| 0.5513 | 5.4290 | 55300 | 0.6062 | 0.0086 | 0.3443 | 0.7624 | 0.3443 |
| 0.6061 | 5.4388 | 55400 | 0.6012 | 0.0086 | 0.3532 | 0.7665 | 0.3532 |
| 0.6046 | 5.4487 | 55500 | 0.6111 | 0.0086 | 0.3452 | 0.7618 | 0.3452 |
| 0.6116 | 5.4585 | 55600 | 0.6074 | 0.0086 | 0.3392 | 0.7613 | 0.3392 |
| 0.6833 | 5.4683 | 55700 | 0.6053 | 0.0086 | 0.3401 | 0.7610 | 0.3401 |
| 0.576 | 5.4781 | 55800 | 0.6129 | 0.0086 | 0.3358 | 0.7596 | 0.3358 |
| 0.5749 | 5.4879 | 55900 | 0.6046 | 0.0086 | 0.3399 | 0.7639 | 0.3399 |
| 0.5885 | 5.4977 | 56000 | 0.6122 | 0.0086 | 0.3388 | 0.7612 | 0.3388 |
| 0.633 | 5.5076 | 56100 | 0.6033 | 0.0086 | 0.3490 | 0.7640 | 0.3490 |
| 0.5744 | 5.5174 | 56200 | 0.5985 | 0.0086 | 0.3376 | 0.7631 | 0.3376 |
| 0.6265 | 5.5272 | 56300 | 0.5994 | 0.0086 | 0.3454 | 0.7642 | 0.3454 |
| 0.5889 | 5.5370 | 56400 | 0.6049 | 0.0086 | 0.3425 | 0.7622 | 0.3425 |
| 0.64 | 5.5468 | 56500 | 0.6027 | 0.0086 | 0.3456 | 0.7636 | 0.3456 |
| 0.6046 | 5.5566 | 56600 | 0.6017 | 0.0086 | 0.3416 | 0.7631 | 0.3416 |
| 0.5951 | 5.5665 | 56700 | 0.6019 | 0.0086 | 0.3429 | 0.7628 | 0.3429 |
| 0.6434 | 5.5763 | 56800 | 0.5970 | 0.0086 | 0.3518 | 0.7665 | 0.3518 |
| 0.5773 | 5.5861 | 56900 | 0.6162 | 0.0086 | 0.3327 | 0.7583 | 0.3327 |
| 0.5599 | 5.5959 | 57000 | 0.6009 | 0.0086 | 0.3484 | 0.7644 | 0.3484 |
| 0.574 | 5.6057 | 57100 | 0.5941 | 0.0086 | 0.3394 | 0.7643 | 0.3394 |
| 0.535 | 5.6156 | 57200 | 0.6007 | 0.0086 | 0.3376 | 0.7612 | 0.3376 |
| 0.6018 | 5.6254 | 57300 | 0.6109 | 0.0086 | 0.3355 | 0.7600 | 0.3355 |
| 0.6263 | 5.6352 | 57400 | 0.6033 | 0.0086 | 0.3407 | 0.7618 | 0.3407 |
| 0.5863 | 5.6450 | 57500 | 0.5982 | 0.0086 | 0.3501 | 0.7648 | 0.3501 |
| 0.5242 | 5.6548 | 57600 | 0.6032 | 0.0086 | 0.3510 | 0.7646 | 0.3510 |
| 0.6277 | 5.6646 | 57700 | 0.6070 | 0.0086 | 0.3389 | 0.7622 | 0.3389 |
| 0.6128 | 5.6745 | 57800 | 0.6077 | 0.0086 | 0.3490 | 0.7640 | 0.3490 |
| 0.6368 | 5.6843 | 57900 | 0.6009 | 0.0086 | 0.3519 | 0.7660 | 0.3519 |
| 0.5409 | 5.6941 | 58000 | 0.6059 | 0.0086 | 0.3435 | 0.7636 | 0.3435 |
| 0.5696 | 5.7039 | 58100 | 0.6064 | 0.0086 | 0.3435 | 0.7635 | 0.3435 |
| 0.6092 | 5.7137 | 58200 | 0.6047 | 0.0086 | 0.3345 | 0.7610 | 0.3345 |
| 0.5849 | 5.7235 | 58300 | 0.5971 | 0.0086 | 0.3432 | 0.7628 | 0.3432 |
| 0.6596 | 5.7334 | 58400 | 0.5974 | 0.0086 | 0.3479 | 0.7682 | 0.3479 |
| 0.589 | 5.7432 | 58500 | 0.5935 | 0.0086 | 0.3504 | 0.7669 | 0.3504 |
| 0.6453 | 5.7530 | 58600 | 0.6040 | 0.0086 | 0.3416 | 0.7631 | 0.3416 |
| 0.5853 | 5.7628 | 58700 | 0.6003 | 0.0086 | 0.3514 | 0.7665 | 0.3514 |
| 0.6131 | 5.7726 | 58800 | 0.6109 | 0.0086 | 0.3459 | 0.7618 | 0.3459 |
| 0.5561 | 5.7824 | 58900 | 0.6016 | 0.0086 | 0.3410 | 0.7638 | 0.3410 |
| 0.567 | 5.7923 | 59000 | 0.5965 | 0.0086 | 0.3487 | 0.7652 | 0.3487 |
| 0.5979 | 5.8021 | 59100 | 0.6007 | 0.0086 | 0.3463 | 0.7644 | 0.3463 |
| 0.5803 | 5.8119 | 59200 | 0.5978 | 0.0086 | 0.3541 | 0.7655 | 0.3541 |
| 0.6249 | 5.8217 | 59300 | 0.6143 | 0.0086 | 0.3376 | 0.7617 | 0.3376 |
| 0.5465 | 5.8315 | 59400 | 0.5971 | 0.0086 | 0.3468 | 0.7649 | 0.3468 |
| 0.6989 | 5.8414 | 59500 | 0.6131 | 0.0086 | 0.3427 | 0.7607 | 0.3427 |
| 0.6143 | 5.8512 | 59600 | 0.6098 | 0.0086 | 0.3339 | 0.7593 | 0.3339 |
| 0.6605 | 5.8610 | 59700 | 0.6150 | 0.0086 | 0.3383 | 0.7612 | 0.3383 |
| 0.6123 | 5.8708 | 59800 | 0.5957 | 0.0086 | 0.3453 | 0.7649 | 0.3453 |
| 0.5687 | 5.8806 | 59900 | 0.6024 | 0.0086 | 0.3490 | 0.7636 | 0.3490 |
| 0.5671 | 5.8904 | 60000 | 0.6030 | 0.0086 | 0.3417 | 0.7634 | 0.3417 |
| 0.5914 | 5.9003 | 60100 | 0.6028 | 0.0086 | 0.3547 | 0.7664 | 0.3547 |
| 0.6167 | 5.9101 | 60200 | 0.6021 | 0.0086 | 0.3415 | 0.7626 | 0.3415 |
| 0.6099 | 5.9199 | 60300 | 0.6087 | 0.0086 | 0.3407 | 0.7627 | 0.3407 |
| 0.5963 | 5.9297 | 60400 | 0.6090 | 0.0086 | 0.3444 | 0.7624 | 0.3444 |
| 0.5501 | 5.9395 | 60500 | 0.6071 | 0.0086 | 0.3399 | 0.7622 | 0.3399 |
| 0.5458 | 5.9493 | 60600 | 0.6018 | 0.0086 | 0.3486 | 0.7647 | 0.3486 |
| 0.6142 | 5.9592 | 60700 | 0.6001 | 0.0086 | 0.3443 | 0.7654 | 0.3443 |
| 0.5448 | 5.9690 | 60800 | 0.6028 | 0.0086 | 0.3525 | 0.7640 | 0.3525 |
| 0.6307 | 5.9788 | 60900 | 0.5984 | 0.0086 | 0.3561 | 0.7680 | 0.3561 |
| 0.5923 | 5.9886 | 61000 | 0.6066 | 0.0086 | 0.3476 | 0.7630 | 0.3476 |
| 0.6139 | 5.9984 | 61100 | 0.6105 | 0.0086 | 0.3431 | 0.7620 | 0.3431 |
| 0.5237 | 6.0082 | 61200 | 0.5978 | 0.0086 | 0.3475 | 0.7652 | 0.3475 |
| 0.5155 | 6.0181 | 61300 | 0.6017 | 0.0086 | 0.3452 | 0.7641 | 0.3452 |
| 0.5353 | 6.0279 | 61400 | 0.5948 | 0.0086 | 0.3454 | 0.7651 | 0.3454 |
| 0.6021 | 6.0377 | 61500 | 0.5954 | 0.0086 | 0.3533 | 0.7684 | 0.3533 |
| 0.5652 | 6.0475 | 61600 | 0.6104 | 0.0086 | 0.3390 | 0.7617 | 0.3390 |
| 0.5795 | 6.0573 | 61700 | 0.5970 | 0.0086 | 0.3532 | 0.7664 | 0.3532 |
| 0.5414 | 6.0672 | 61800 | 0.6050 | 0.0086 | 0.3409 | 0.7628 | 0.3409 |
| 0.6404 | 6.0770 | 61900 | 0.6055 | 0.0086 | 0.3401 | 0.7614 | 0.3401 |
| 0.6101 | 6.0868 | 62000 | 0.6077 | 0.0086 | 0.3451 | 0.7646 | 0.3451 |
| 0.5883 | 6.0966 | 62100 | 0.5963 | 0.0086 | 0.3574 | 0.7656 | 0.3574 |
| 0.6232 | 6.1064 | 62200 | 0.5926 | 0.0086 | 0.3528 | 0.7669 | 0.3528 |
| 0.5768 | 6.1162 | 62300 | 0.5978 | 0.0086 | 0.3505 | 0.7649 | 0.3505 |
| 0.5955 | 6.1261 | 62400 | 0.5948 | 0.0086 | 0.3470 | 0.7662 | 0.3470 |
| 0.5537 | 6.1359 | 62500 | 0.6058 | 0.0086 | 0.3425 | 0.7636 | 0.3425 |
| 0.6022 | 6.1457 | 62600 | 0.6098 | 0.0086 | 0.3460 | 0.7625 | 0.3460 |
| 0.5593 | 6.1555 | 62700 | 0.5982 | 0.0086 | 0.3501 | 0.7681 | 0.3501 |
| 0.5907 | 6.1653 | 62800 | 0.6009 | 0.0086 | 0.3467 | 0.7632 | 0.3467 |
| 0.6055 | 6.1751 | 62900 | 0.6120 | 0.0086 | 0.3414 | 0.7615 | 0.3414 |
| 0.5671 | 6.1850 | 63000 | 0.5926 | 0.0086 | 0.3445 | 0.7650 | 0.3445 |
| 0.6491 | 6.1948 | 63100 | 0.6015 | 0.0086 | 0.3462 | 0.7653 | 0.3462 |
| 0.5969 | 6.2046 | 63200 | 0.6068 | 0.0086 | 0.3474 | 0.7638 | 0.3474 |
| 0.588 | 6.2144 | 63300 | 0.5964 | 0.0086 | 0.3470 | 0.7670 | 0.3470 |
| 0.6095 | 6.2242 | 63400 | 0.6026 | 0.0086 | 0.3409 | 0.7629 | 0.3409 |
| 0.5764 | 6.2340 | 63500 | 0.6048 | 0.0086 | 0.3425 | 0.7630 | 0.3425 |
| 0.5593 | 6.2439 | 63600 | 0.5952 | 0.0086 | 0.3450 | 0.7655 | 0.3450 |
| 0.6257 | 6.2537 | 63700 | 0.6123 | 0.0086 | 0.3427 | 0.7615 | 0.3427 |
| 0.5877 | 6.2635 | 63800 | 0.5927 | 0.0086 | 0.3516 | 0.7677 | 0.3516 |
| 0.6055 | 6.2733 | 63900 | 0.5970 | 0.0086 | 0.3523 | 0.7660 | 0.3523 |
| 0.6661 | 6.2831 | 64000 | 0.5939 | 0.0086 | 0.3580 | 0.7693 | 0.3580 |
| 0.5649 | 6.2930 | 64100 | 0.5995 | 0.0086 | 0.3460 | 0.7639 | 0.3460 |
| 0.5717 | 6.3028 | 64200 | 0.5948 | 0.0086 | 0.3516 | 0.7664 | 0.3516 |
| 0.5785 | 6.3126 | 64300 | 0.6019 | 0.0086 | 0.3553 | 0.7658 | 0.3553 |
| 0.5516 | 6.3224 | 64400 | 0.5879 | 0.0086 | 0.3580 | 0.7696 | 0.3580 |
| 0.586 | 6.3322 | 64500 | 0.6082 | 0.0086 | 0.3450 | 0.7635 | 0.3450 |
| 0.6076 | 6.3420 | 64600 | 0.5920 | 0.0086 | 0.3537 | 0.7678 | 0.3537 |
| 0.5573 | 6.3519 | 64700 | 0.5887 | 0.0086 | 0.3530 | 0.7693 | 0.3530 |
| 0.5897 | 6.3617 | 64800 | 0.5964 | 0.0086 | 0.3543 | 0.7674 | 0.3543 |
| 0.5995 | 6.3715 | 64900 | 0.5972 | 0.0086 | 0.3455 | 0.7661 | 0.3455 |
| 0.6352 | 6.3813 | 65000 | 0.5914 | 0.0086 | 0.3603 | 0.7686 | 0.3603 |
| 0.5732 | 6.3911 | 65100 | 0.5934 | 0.0086 | 0.3494 | 0.7661 | 0.3494 |
| 0.624 | 6.4009 | 65200 | 0.5948 | 0.0086 | 0.3537 | 0.7660 | 0.3537 |
| 0.6041 | 6.4108 | 65300 | 0.6086 | 0.0086 | 0.3528 | 0.7644 | 0.3528 |
| 0.624 | 6.4206 | 65400 | 0.5966 | 0.0086 | 0.3420 | 0.7646 | 0.3420 |
| 0.5989 | 6.4304 | 65500 | 0.5963 | 0.0086 | 0.3541 | 0.7686 | 0.3541 |
| 0.5905 | 6.4402 | 65600 | 0.5970 | 0.0086 | 0.3552 | 0.7661 | 0.3552 |
| 0.6476 | 6.4500 | 65700 | 0.5944 | 0.0086 | 0.3525 | 0.7666 | 0.3525 |
| 0.5435 | 6.4598 | 65800 | 0.5927 | 0.0086 | 0.3559 | 0.7685 | 0.3559 |
| 0.6189 | 6.4697 | 65900 | 0.5945 | 0.0086 | 0.3448 | 0.7654 | 0.3448 |
| 0.5674 | 6.4795 | 66000 | 0.5980 | 0.0086 | 0.3461 | 0.7651 | 0.3461 |
| 0.5489 | 6.4893 | 66100 | 0.5955 | 0.0086 | 0.3467 | 0.7655 | 0.3467 |
| 0.5846 | 6.4991 | 66200 | 0.5933 | 0.0086 | 0.3498 | 0.7668 | 0.3498 |
| 0.6233 | 6.5089 | 66300 | 0.5973 | 0.0086 | 0.3541 | 0.7664 | 0.3541 |
| 0.5745 | 6.5188 | 66400 | 0.6044 | 0.0086 | 0.3379 | 0.7605 | 0.3379 |
| 0.5808 | 6.5286 | 66500 | 0.5958 | 0.0086 | 0.3476 | 0.7668 | 0.3476 |
| 0.568 | 6.5384 | 66600 | 0.5892 | 0.0086 | 0.3528 | 0.7684 | 0.3528 |
| 0.5974 | 6.5482 | 66700 | 0.5941 | 0.0086 | 0.3516 | 0.7675 | 0.3516 |
| 0.5878 | 6.5580 | 66800 | 0.5955 | 0.0086 | 0.3491 | 0.7674 | 0.3491 |
| 0.6682 | 6.5678 | 66900 | 0.5953 | 0.0086 | 0.3548 | 0.7671 | 0.3548 |
| 0.6099 | 6.5777 | 67000 | 0.6048 | 0.0086 | 0.3430 | 0.7654 | 0.3430 |
| 0.6265 | 6.5875 | 67100 | 0.5982 | 0.0086 | 0.3434 | 0.7641 | 0.3434 |
| 0.6171 | 6.5973 | 67200 | 0.6086 | 0.0086 | 0.3406 | 0.7608 | 0.3406 |
| 0.5814 | 6.6071 | 67300 | 0.5947 | 0.0086 | 0.3528 | 0.7674 | 0.3528 |
| 0.5707 | 6.6169 | 67400 | 0.5945 | 0.0086 | 0.3563 | 0.7675 | 0.3563 |
| 0.6171 | 6.6267 | 67500 | 0.5859 | 0.0086 | 0.3561 | 0.7702 | 0.3561 |
| 0.5979 | 6.6366 | 67600 | 0.5888 | 0.0086 | 0.3501 | 0.7681 | 0.3501 |
| 0.705 | 6.6464 | 67700 | 0.5955 | 0.0086 | 0.3504 | 0.7673 | 0.3504 |
| 0.5427 | 6.6562 | 67800 | 0.5961 | 0.0086 | 0.3491 | 0.7653 | 0.3491 |
| 0.5668 | 6.6660 | 67900 | 0.5942 | 0.0086 | 0.3574 | 0.7683 | 0.3574 |
| 0.6164 | 6.6758 | 68000 | 0.6038 | 0.0086 | 0.3369 | 0.7630 | 0.3369 |
| 0.5457 | 6.6856 | 68100 | 0.5943 | 0.0086 | 0.3621 | 0.7696 | 0.3621 |
| 0.5495 | 6.6955 | 68200 | 0.5950 | 0.0086 | 0.3522 | 0.7667 | 0.3522 |
| 0.5794 | 6.7053 | 68300 | 0.6088 | 0.0086 | 0.3363 | 0.7590 | 0.3363 |
| 0.5564 | 6.7151 | 68400 | 0.5978 | 0.0086 | 0.3494 | 0.7650 | 0.3494 |
| 0.6342 | 6.7249 | 68500 | 0.5965 | 0.0086 | 0.3480 | 0.7660 | 0.3480 |
| 0.5781 | 6.7347 | 68600 | 0.5870 | 0.0086 | 0.3539 | 0.7702 | 0.3539 |
| 0.4772 | 6.7446 | 68700 | 0.5964 | 0.0086 | 0.3513 | 0.7662 | 0.3513 |
| 0.5988 | 6.7544 | 68800 | 0.5956 | 0.0086 | 0.3474 | 0.7652 | 0.3474 |
| 0.5904 | 6.7642 | 68900 | 0.5871 | 0.0086 | 0.3578 | 0.7698 | 0.3578 |
| 0.6189 | 6.7740 | 69000 | 0.5882 | 0.0086 | 0.3605 | 0.7709 | 0.3605 |
| 0.5626 | 6.7838 | 69100 | 0.5967 | 0.0086 | 0.3566 | 0.7687 | 0.3566 |
| 0.6542 | 6.7936 | 69200 | 0.5934 | 0.0086 | 0.3505 | 0.7674 | 0.3505 |
| 0.5397 | 6.8035 | 69300 | 0.6012 | 0.0086 | 0.3499 | 0.7657 | 0.3499 |
| 0.644 | 6.8133 | 69400 | 0.6028 | 0.0086 | 0.3518 | 0.7647 | 0.3518 |
| 0.6231 | 6.8231 | 69500 | 0.6077 | 0.0086 | 0.3485 | 0.7636 | 0.3485 |
| 0.6159 | 6.8329 | 69600 | 0.6202 | 0.0086 | 0.3363 | 0.7583 | 0.3363 |
| 0.6497 | 6.8427 | 69700 | 0.6063 | 0.0086 | 0.3483 | 0.7640 | 0.3483 |
| 0.5618 | 6.8525 | 69800 | 0.5967 | 0.0086 | 0.3524 | 0.7663 | 0.3524 |
| 0.5196 | 6.8624 | 69900 | 0.5989 | 0.0086 | 0.3512 | 0.7652 | 0.3512 |
| 0.6337 | 6.8722 | 70000 | 0.5913 | 0.0086 | 0.3574 | 0.7697 | 0.3574 |
| 0.5716 | 6.8820 | 70100 | 0.5926 | 0.0086 | 0.3609 | 0.7703 | 0.3609 |
| 0.576 | 6.8918 | 70200 | 0.5926 | 0.0086 | 0.3509 | 0.7674 | 0.3509 |
| 0.571 | 6.9016 | 70300 | 0.5962 | 0.0086 | 0.3603 | 0.7693 | 0.3603 |
| 0.6006 | 6.9114 | 70400 | 0.5896 | 0.0086 | 0.3587 | 0.7685 | 0.3587 |
| 0.5712 | 6.9213 | 70500 | 0.5916 | 0.0086 | 0.3567 | 0.7684 | 0.3567 |
| 0.5858 | 6.9311 | 70600 | 0.5915 | 0.0086 | 0.3520 | 0.7663 | 0.3520 |
| 0.5905 | 6.9409 | 70700 | 0.5915 | 0.0086 | 0.3494 | 0.7661 | 0.3494 |
| 0.5847 | 6.9507 | 70800 | 0.5878 | 0.0086 | 0.3570 | 0.7692 | 0.3570 |
| 0.5519 | 6.9605 | 70900 | 0.5914 | 0.0086 | 0.3562 | 0.7687 | 0.3562 |
| 0.6569 | 6.9704 | 71000 | 0.5931 | 0.0086 | 0.3500 | 0.7675 | 0.3500 |
| 0.6167 | 6.9802 | 71100 | 0.5855 | 0.0086 | 0.3582 | 0.7703 | 0.3582 |
| 0.6062 | 6.9900 | 71200 | 0.5914 | 0.0086 | 0.3523 | 0.7680 | 0.3523 |
| 0.5836 | 6.9998 | 71300 | 0.5929 | 0.0086 | 0.3552 | 0.7684 | 0.3552 |
| 0.5238 | 7.0096 | 71400 | 0.6047 | 0.0086 | 0.3515 | 0.7648 | 0.3515 |
| 0.5477 | 7.0194 | 71500 | 0.5894 | 0.0086 | 0.3610 | 0.7702 | 0.3610 |
| 0.5009 | 7.0293 | 71600 | 0.5858 | 0.0086 | 0.3586 | 0.7704 | 0.3586 |
| 0.5508 | 7.0391 | 71700 | 0.5895 | 0.0086 | 0.3530 | 0.7684 | 0.3530 |
| 0.5757 | 7.0489 | 71800 | 0.5910 | 0.0086 | 0.3545 | 0.7689 | 0.3545 |
| 0.6301 | 7.0587 | 71900 | 0.5939 | 0.0086 | 0.3535 | 0.7681 | 0.3535 |
| 0.5702 | 7.0685 | 72000 | 0.5921 | 0.0086 | 0.3560 | 0.7699 | 0.3560 |
| 0.6324 | 7.0783 | 72100 | 0.5873 | 0.0086 | 0.3598 | 0.7724 | 0.3598 |
| 0.6174 | 7.0882 | 72200 | 0.5878 | 0.0086 | 0.3561 | 0.7705 | 0.3561 |
| 0.582 | 7.0980 | 72300 | 0.6042 | 0.0086 | 0.3475 | 0.7647 | 0.3475 |
| 0.6208 | 7.1078 | 72400 | 0.5887 | 0.0086 | 0.3627 | 0.7705 | 0.3627 |
| 0.5802 | 7.1176 | 72500 | 0.5923 | 0.0086 | 0.3505 | 0.7674 | 0.3505 |
| 0.572 | 7.1274 | 72600 | 0.5859 | 0.0086 | 0.3597 | 0.7704 | 0.3597 |
| 0.5382 | 7.1372 | 72700 | 0.5974 | 0.0086 | 0.3578 | 0.7680 | 0.3578 |
| 0.5877 | 7.1471 | 72800 | 0.5815 | 0.0086 | 0.3574 | 0.7705 | 0.3574 |
| 0.5633 | 7.1569 | 72900 | 0.5914 | 0.0086 | 0.3553 | 0.7686 | 0.3553 |
| 0.6295 | 7.1667 | 73000 | 0.5918 | 0.0086 | 0.3459 | 0.7678 | 0.3459 |
| 0.5891 | 7.1765 | 73100 | 0.5863 | 0.0086 | 0.3620 | 0.7709 | 0.3620 |
| 0.6128 | 7.1863 | 73200 | 0.5900 | 0.0086 | 0.3552 | 0.7694 | 0.3552 |
| 0.5989 | 7.1962 | 73300 | 0.5926 | 0.0086 | 0.3584 | 0.7681 | 0.3584 |
| 0.5607 | 7.2060 | 73400 | 0.5867 | 0.0086 | 0.3557 | 0.7700 | 0.3557 |
| 0.5966 | 7.2158 | 73500 | 0.5878 | 0.0086 | 0.3563 | 0.7689 | 0.3563 |
| 0.6647 | 7.2256 | 73600 | 0.6094 | 0.0086 | 0.3471 | 0.7629 | 0.3471 |
| 0.6499 | 7.2354 | 73700 | 0.5923 | 0.0086 | 0.3527 | 0.7678 | 0.3527 |
| 0.573 | 7.2452 | 73800 | 0.5867 | 0.0086 | 0.3599 | 0.7708 | 0.3599 |
| 0.5666 | 7.2551 | 73900 | 0.5903 | 0.0086 | 0.3556 | 0.7689 | 0.3556 |
| 0.5647 | 7.2649 | 74000 | 0.5872 | 0.0086 | 0.3575 | 0.7698 | 0.3575 |
| 0.6188 | 7.2747 | 74100 | 0.5942 | 0.0086 | 0.3454 | 0.7662 | 0.3454 |
| 0.5774 | 7.2845 | 74200 | 0.5868 | 0.0086 | 0.3628 | 0.7725 | 0.3628 |
| 0.6064 | 7.2943 | 74300 | 0.5929 | 0.0086 | 0.3473 | 0.7664 | 0.3473 |
| 0.492 | 7.3041 | 74400 | 0.5950 | 0.0086 | 0.3524 | 0.7665 | 0.3524 |
| 0.5333 | 7.3140 | 74500 | 0.5831 | 0.0086 | 0.3601 | 0.7719 | 0.3601 |
| 0.5254 | 7.3238 | 74600 | 0.5866 | 0.0086 | 0.3621 | 0.7702 | 0.3621 |
| 0.6001 | 7.3336 | 74700 | 0.5940 | 0.0086 | 0.3472 | 0.7675 | 0.3472 |
| 0.5299 | 7.3434 | 74800 | 0.5916 | 0.0086 | 0.3584 | 0.7696 | 0.3584 |
| 0.5574 | 7.3532 | 74900 | 0.5912 | 0.0086 | 0.3515 | 0.7678 | 0.3515 |
| 0.6757 | 7.3630 | 75000 | 0.5929 | 0.0086 | 0.3550 | 0.7671 | 0.3550 |
| 0.6406 | 7.3729 | 75100 | 0.5881 | 0.0086 | 0.3574 | 0.7696 | 0.3574 |
| 0.5522 | 7.3827 | 75200 | 0.5907 | 0.0086 | 0.3612 | 0.7707 | 0.3612 |
| 0.6441 | 7.3925 | 75300 | 0.5912 | 0.0086 | 0.3540 | 0.7685 | 0.3540 |
| 0.6 | 7.4023 | 75400 | 0.5934 | 0.0086 | 0.3512 | 0.7671 | 0.3512 |
| 0.5934 | 7.4121 | 75500 | 0.5913 | 0.0086 | 0.3509 | 0.7670 | 0.3509 |
| 0.603 | 7.4220 | 75600 | 0.5859 | 0.0086 | 0.3621 | 0.7715 | 0.3621 |
| 0.5952 | 7.4318 | 75700 | 0.5926 | 0.0086 | 0.3602 | 0.7686 | 0.3602 |
| 0.6199 | 7.4416 | 75800 | 0.5878 | 0.0086 | 0.3560 | 0.7684 | 0.3560 |
| 0.6554 | 7.4514 | 75900 | 0.5865 | 0.0086 | 0.3616 | 0.7703 | 0.3616 |
| 0.6334 | 7.4612 | 76000 | 0.5952 | 0.0086 | 0.3577 | 0.7687 | 0.3577 |
| 0.5947 | 7.4710 | 76100 | 0.5892 | 0.0086 | 0.3600 | 0.7708 | 0.3600 |
| 0.5357 | 7.4809 | 76200 | 0.5959 | 0.0086 | 0.3502 | 0.7660 | 0.3502 |
| 0.6013 | 7.4907 | 76300 | 0.5896 | 0.0086 | 0.3552 | 0.7706 | 0.3552 |
| 0.5504 | 7.5005 | 76400 | 0.5898 | 0.0086 | 0.3525 | 0.7683 | 0.3525 |
| 0.5427 | 7.5103 | 76500 | 0.5874 | 0.0086 | 0.3582 | 0.7700 | 0.3582 |
| 0.5804 | 7.5201 | 76600 | 0.5888 | 0.0086 | 0.3549 | 0.7705 | 0.3549 |
| 0.57 | 7.5299 | 76700 | 0.5918 | 0.0086 | 0.3579 | 0.7699 | 0.3579 |
| 0.5929 | 7.5398 | 76800 | 0.5840 | 0.0086 | 0.3603 | 0.7716 | 0.3603 |
| 0.6013 | 7.5496 | 76900 | 0.5924 | 0.0086 | 0.3594 | 0.7701 | 0.3594 |
| 0.5881 | 7.5594 | 77000 | 0.5921 | 0.0086 | 0.3592 | 0.7699 | 0.3592 |
| 0.5505 | 7.5692 | 77100 | 0.5871 | 0.0086 | 0.3623 | 0.7700 | 0.3623 |
| 0.5413 | 7.5790 | 77200 | 0.5886 | 0.0086 | 0.3604 | 0.7713 | 0.3604 |
| 0.5669 | 7.5888 | 77300 | 0.5888 | 0.0086 | 0.3476 | 0.7671 | 0.3476 |
| 0.5455 | 7.5987 | 77400 | 0.5910 | 0.0086 | 0.3534 | 0.7704 | 0.3534 |
| 0.6402 | 7.6085 | 77500 | 0.5878 | 0.0086 | 0.3519 | 0.7693 | 0.3519 |
| 0.6044 | 7.6183 | 77600 | 0.5832 | 0.0086 | 0.3583 | 0.7708 | 0.3583 |
| 0.5031 | 7.6281 | 77700 | 0.5930 | 0.0086 | 0.3516 | 0.7680 | 0.3516 |
| 0.6125 | 7.6379 | 77800 | 0.5875 | 0.0086 | 0.3633 | 0.7711 | 0.3633 |
| 0.5633 | 7.6478 | 77900 | 0.5934 | 0.0086 | 0.3465 | 0.7672 | 0.3465 |
| 0.5994 | 7.6576 | 78000 | 0.5883 | 0.0086 | 0.3554 | 0.7690 | 0.3554 |
| 0.5849 | 7.6674 | 78100 | 0.5916 | 0.0086 | 0.3585 | 0.7696 | 0.3585 |
| 0.5268 | 7.6772 | 78200 | 0.5952 | 0.0086 | 0.3597 | 0.7697 | 0.3597 |
| 0.5745 | 7.6870 | 78300 | 0.5826 | 0.0086 | 0.3581 | 0.7717 | 0.3581 |
| 0.5543 | 7.6968 | 78400 | 0.5950 | 0.0086 | 0.3530 | 0.7651 | 0.3530 |
| 0.5975 | 7.7067 | 78500 | 0.5898 | 0.0086 | 0.3511 | 0.7669 | 0.3511 |
| 0.5825 | 7.7165 | 78600 | 0.5894 | 0.0086 | 0.3623 | 0.7719 | 0.3623 |
| 0.5566 | 7.7263 | 78700 | 0.5929 | 0.0086 | 0.3504 | 0.7668 | 0.3504 |
| 0.5857 | 7.7361 | 78800 | 0.5813 | 0.0086 | 0.3595 | 0.7712 | 0.3595 |
| 0.62 | 7.7459 | 78900 | 0.5933 | 0.0086 | 0.3513 | 0.7670 | 0.3513 |
| 0.5486 | 7.7557 | 79000 | 0.5952 | 0.0086 | 0.3524 | 0.7668 | 0.3524 |
| 0.6074 | 7.7656 | 79100 | 0.5829 | 0.0086 | 0.3658 | 0.7729 | 0.3658 |
| 0.5707 | 7.7754 | 79200 | 0.5991 | 0.0086 | 0.3558 | 0.7674 | 0.3558 |
| 0.5961 | 7.7852 | 79300 | 0.5828 | 0.0086 | 0.3546 | 0.7715 | 0.3546 |
| 0.5388 | 7.7950 | 79400 | 0.5819 | 0.0086 | 0.3641 | 0.7717 | 0.3641 |
| 0.5751 | 7.8048 | 79500 | 0.5891 | 0.0086 | 0.3488 | 0.7677 | 0.3488 |
| 0.5864 | 7.8146 | 79600 | 0.5882 | 0.0086 | 0.3596 | 0.7692 | 0.3596 |
| 0.587 | 7.8245 | 79700 | 0.5874 | 0.0086 | 0.3615 | 0.7714 | 0.3615 |
| 0.5909 | 7.8343 | 79800 | 0.5838 | 0.0086 | 0.3597 | 0.7711 | 0.3597 |
| 0.5886 | 7.8441 | 79900 | 0.5956 | 0.0086 | 0.3525 | 0.7663 | 0.3525 |
| 0.5286 | 7.8539 | 80000 | 0.5894 | 0.0086 | 0.3526 | 0.7683 | 0.3526 |
| 0.5899 | 7.8637 | 80100 | 0.5905 | 0.0086 | 0.3465 | 0.7675 | 0.3465 |
| 0.492 | 7.8736 | 80200 | 0.5877 | 0.0086 | 0.3573 | 0.7701 | 0.3573 |
| 0.5326 | 7.8834 | 80300 | 0.5854 | 0.0086 | 0.3646 | 0.7717 | 0.3646 |
| 0.6815 | 7.8932 | 80400 | 0.5972 | 0.0086 | 0.3476 | 0.7658 | 0.3476 |
| 0.5531 | 7.9030 | 80500 | 0.5865 | 0.0086 | 0.3564 | 0.7690 | 0.3564 |
| 0.5357 | 7.9128 | 80600 | 0.5967 | 0.0086 | 0.3469 | 0.7658 | 0.3469 |
| 0.5807 | 7.9226 | 80700 | 0.5861 | 0.0086 | 0.3532 | 0.7694 | 0.3532 |
| 0.5946 | 7.9325 | 80800 | 0.5826 | 0.0086 | 0.3581 | 0.7701 | 0.3581 |
| 0.6202 | 7.9423 | 80900 | 0.5818 | 0.0086 | 0.3631 | 0.7722 | 0.3631 |
| 0.5944 | 7.9521 | 81000 | 0.5837 | 0.0086 | 0.3612 | 0.7715 | 0.3612 |
| 0.5202 | 7.9619 | 81100 | 0.5876 | 0.0086 | 0.3595 | 0.7698 | 0.3595 |
| 0.5982 | 7.9717 | 81200 | 0.5858 | 0.0086 | 0.3581 | 0.7697 | 0.3581 |
| 0.5979 | 7.9815 | 81300 | 0.5933 | 0.0086 | 0.3546 | 0.7666 | 0.3546 |
| 0.5333 | 7.9914 | 81400 | 0.5850 | 0.0086 | 0.3561 | 0.7705 | 0.3561 |
| 0.5663 | 8.0012 | 81500 | 0.5838 | 0.0086 | 0.3595 | 0.7718 | 0.3595 |
| 0.5212 | 8.0110 | 81600 | 0.5862 | 0.0086 | 0.3533 | 0.7687 | 0.3533 |
| 0.5368 | 8.0208 | 81700 | 0.5830 | 0.0086 | 0.3532 | 0.7703 | 0.3532 |
| 0.5592 | 8.0306 | 81800 | 0.5915 | 0.0086 | 0.3577 | 0.7687 | 0.3577 |
| 0.5379 | 8.0404 | 81900 | 0.5902 | 0.0086 | 0.3587 | 0.7699 | 0.3587 |
| 0.5923 | 8.0503 | 82000 | 0.5867 | 0.0086 | 0.3617 | 0.7700 | 0.3617 |
| 0.6002 | 8.0601 | 82100 | 0.6034 | 0.0086 | 0.3596 | 0.7671 | 0.3596 |
| 0.5325 | 8.0699 | 82200 | 0.5851 | 0.0086 | 0.3568 | 0.7704 | 0.3568 |
| 0.4816 | 8.0797 | 82300 | 0.5951 | 0.0086 | 0.3584 | 0.7683 | 0.3584 |
| 0.6248 | 8.0895 | 82400 | 0.5835 | 0.0086 | 0.3579 | 0.7710 | 0.3579 |
| 0.576 | 8.0994 | 82500 | 0.6037 | 0.0086 | 0.3488 | 0.7647 | 0.3488 |
| 0.5566 | 8.1092 | 82600 | 0.5937 | 0.0086 | 0.3515 | 0.7669 | 0.3515 |
| 0.604 | 8.1190 | 82700 | 0.5864 | 0.0086 | 0.3563 | 0.7707 | 0.3563 |
| 0.6502 | 8.1288 | 82800 | 0.6010 | 0.0086 | 0.3472 | 0.7640 | 0.3472 |
| 0.5729 | 8.1386 | 82900 | 0.5842 | 0.0086 | 0.3570 | 0.7702 | 0.3570 |
| 0.5656 | 8.1484 | 83000 | 0.5814 | 0.0086 | 0.3619 | 0.7729 | 0.3619 |
| 0.6284 | 8.1583 | 83100 | 0.5960 | 0.0086 | 0.3576 | 0.7655 | 0.3576 |
| 0.579 | 8.1681 | 83200 | 0.5877 | 0.0086 | 0.3608 | 0.7712 | 0.3608 |
| 0.5517 | 8.1779 | 83300 | 0.5916 | 0.0086 | 0.3532 | 0.7676 | 0.3532 |
| 0.5575 | 8.1877 | 83400 | 0.5836 | 0.0086 | 0.3604 | 0.7715 | 0.3604 |
| 0.4976 | 8.1975 | 83500 | 0.5898 | 0.0086 | 0.3561 | 0.7699 | 0.3561 |
| 0.5681 | 8.2073 | 83600 | 0.5899 | 0.0086 | 0.3610 | 0.7707 | 0.3610 |
| 0.5526 | 8.2172 | 83700 | 0.5809 | 0.0086 | 0.3656 | 0.7729 | 0.3656 |
| 0.6385 | 8.2270 | 83800 | 0.5972 | 0.0086 | 0.3488 | 0.7661 | 0.3488 |
| 0.4887 | 8.2368 | 83900 | 0.5899 | 0.0086 | 0.3592 | 0.7697 | 0.3592 |
| 0.5925 | 8.2466 | 84000 | 0.6034 | 0.0086 | 0.3551 | 0.7654 | 0.3551 |
| 0.5207 | 8.2564 | 84100 | 0.5802 | 0.0086 | 0.3670 | 0.7732 | 0.3670 |
| 0.5194 | 8.2662 | 84200 | 0.5875 | 0.0086 | 0.3612 | 0.7695 | 0.3612 |
| 0.5728 | 8.2761 | 84300 | 0.5818 | 0.0086 | 0.3648 | 0.7728 | 0.3648 |
| 0.6193 | 8.2859 | 84400 | 0.5887 | 0.0086 | 0.3605 | 0.7703 | 0.3605 |
| 0.6311 | 8.2957 | 84500 | 0.5873 | 0.0086 | 0.3499 | 0.7691 | 0.3499 |
| 0.5772 | 8.3055 | 84600 | 0.5867 | 0.0086 | 0.3563 | 0.7701 | 0.3563 |
| 0.571 | 8.3153 | 84700 | 0.5889 | 0.0086 | 0.3565 | 0.7703 | 0.3565 |
| 0.5568 | 8.3252 | 84800 | 0.5880 | 0.0086 | 0.3588 | 0.7715 | 0.3588 |
| 0.5999 | 8.3350 | 84900 | 0.5819 | 0.0086 | 0.3617 | 0.7712 | 0.3617 |
| 0.597 | 8.3448 | 85000 | 0.5798 | 0.0086 | 0.3642 | 0.7721 | 0.3642 |
| 0.5151 | 8.3546 | 85100 | 0.5820 | 0.0086 | 0.3598 | 0.7715 | 0.3598 |
| 0.5999 | 8.3644 | 85200 | 0.5862 | 0.0086 | 0.3533 | 0.7694 | 0.3533 |
| 0.5282 | 8.3742 | 85300 | 0.5841 | 0.0086 | 0.3568 | 0.7705 | 0.3568 |
| 0.5648 | 8.3841 | 85400 | 0.5839 | 0.0086 | 0.3551 | 0.7707 | 0.3551 |
| 0.5371 | 8.3939 | 85500 | 0.5882 | 0.0086 | 0.3610 | 0.7708 | 0.3610 |
| 0.6224 | 8.4037 | 85600 | 0.5870 | 0.0086 | 0.3467 | 0.7679 | 0.3467 |
| 0.5703 | 8.4135 | 85700 | 0.5854 | 0.0086 | 0.3625 | 0.7720 | 0.3625 |
| 0.562 | 8.4233 | 85800 | 0.5944 | 0.0086 | 0.3575 | 0.7690 | 0.3575 |
| 0.5535 | 8.4331 | 85900 | 0.5814 | 0.0086 | 0.3643 | 0.7728 | 0.3643 |
| 0.5649 | 8.4430 | 86000 | 0.5852 | 0.0086 | 0.3508 | 0.7703 | 0.3508 |
| 0.6245 | 8.4528 | 86100 | 0.5755 | 0.0086 | 0.3632 | 0.7741 | 0.3632 |
| 0.5627 | 8.4626 | 86200 | 0.5830 | 0.0086 | 0.3586 | 0.7710 | 0.3586 |
| 0.5904 | 8.4724 | 86300 | 0.5809 | 0.0086 | 0.3601 | 0.7730 | 0.3601 |
| 0.5634 | 8.4822 | 86400 | 0.5855 | 0.0086 | 0.3590 | 0.7716 | 0.3590 |
| 0.5655 | 8.4920 | 86500 | 0.5911 | 0.0086 | 0.3534 | 0.7690 | 0.3534 |
| 0.6366 | 8.5019 | 86600 | 0.5825 | 0.0086 | 0.3630 | 0.7736 | 0.3630 |
| 0.5838 | 8.5117 | 86700 | 0.5855 | 0.0086 | 0.3639 | 0.7718 | 0.3639 |
| 0.5548 | 8.5215 | 86800 | 0.5798 | 0.0086 | 0.3656 | 0.7738 | 0.3656 |
| 0.5033 | 8.5313 | 86900 | 0.5776 | 0.0086 | 0.3673 | 0.7754 | 0.3673 |
| 0.6673 | 8.5411 | 87000 | 0.5884 | 0.0086 | 0.3549 | 0.7704 | 0.3549 |
| 0.5491 | 8.5510 | 87100 | 0.5892 | 0.0086 | 0.3574 | 0.7680 | 0.3574 |
| 0.5848 | 8.5608 | 87200 | 0.5985 | 0.0086 | 0.3525 | 0.7671 | 0.3525 |
| 0.6011 | 8.5706 | 87300 | 0.5908 | 0.0086 | 0.3605 | 0.7698 | 0.3605 |
| 0.5886 | 8.5804 | 87400 | 0.5852 | 0.0086 | 0.3529 | 0.7692 | 0.3529 |
| 0.5758 | 8.5902 | 87500 | 0.5836 | 0.0086 | 0.3667 | 0.7735 | 0.3667 |
| 0.5647 | 8.6000 | 87600 | 0.5861 | 0.0086 | 0.3632 | 0.7708 | 0.3632 |
| 0.5686 | 8.6099 | 87700 | 0.5818 | 0.0086 | 0.3689 | 0.7745 | 0.3689 |
| 0.5792 | 8.6197 | 87800 | 0.5883 | 0.0086 | 0.3585 | 0.7709 | 0.3585 |
| 0.5647 | 8.6295 | 87900 | 0.5908 | 0.0086 | 0.3611 | 0.7711 | 0.3611 |
| 0.5667 | 8.6393 | 88000 | 0.5807 | 0.0086 | 0.3680 | 0.7745 | 0.3680 |
| 0.579 | 8.6491 | 88100 | 0.5777 | 0.0086 | 0.3652 | 0.7747 | 0.3652 |
| 0.5553 | 8.6589 | 88200 | 0.5802 | 0.0086 | 0.3576 | 0.7719 | 0.3576 |
| 0.585 | 8.6688 | 88300 | 0.5905 | 0.0086 | 0.3599 | 0.7717 | 0.3599 |
| 0.5563 | 8.6786 | 88400 | 0.5784 | 0.0086 | 0.3600 | 0.7728 | 0.3600 |
| 0.5916 | 8.6884 | 88500 | 0.5820 | 0.0086 | 0.3576 | 0.7712 | 0.3576 |
| 0.5878 | 8.6982 | 88600 | 0.5904 | 0.0086 | 0.3537 | 0.7683 | 0.3537 |
| 0.5155 | 8.7080 | 88700 | 0.5894 | 0.0086 | 0.3545 | 0.7692 | 0.3545 |
| 0.629 | 8.7178 | 88800 | 0.5865 | 0.0086 | 0.3635 | 0.7720 | 0.3635 |
| 0.5567 | 8.7277 | 88900 | 0.5906 | 0.0086 | 0.3552 | 0.7687 | 0.3552 |
| 0.55 | 8.7375 | 89000 | 0.5838 | 0.0086 | 0.3660 | 0.7729 | 0.3660 |
| 0.542 | 8.7473 | 89100 | 0.5813 | 0.0086 | 0.3688 | 0.7720 | 0.3688 |
| 0.5736 | 8.7571 | 89200 | 0.6036 | 0.0086 | 0.3412 | 0.7631 | 0.3412 |
| 0.5241 | 8.7669 | 89300 | 0.5859 | 0.0086 | 0.3582 | 0.7720 | 0.3582 |
| 0.5664 | 8.7768 | 89400 | 0.5858 | 0.0086 | 0.3554 | 0.7702 | 0.3554 |
| 0.5501 | 8.7866 | 89500 | 0.5787 | 0.0086 | 0.3655 | 0.7733 | 0.3655 |
| 0.5268 | 8.7964 | 89600 | 0.5803 | 0.0086 | 0.3522 | 0.7706 | 0.3522 |
| 0.5877 | 8.8062 | 89700 | 0.5834 | 0.0086 | 0.3559 | 0.7708 | 0.3559 |
| 0.5644 | 8.8160 | 89800 | 0.5791 | 0.0086 | 0.3651 | 0.7738 | 0.3651 |
| 0.5808 | 8.8258 | 89900 | 0.5888 | 0.0086 | 0.3585 | 0.7706 | 0.3585 |
| 0.5461 | 8.8357 | 90000 | 0.5769 | 0.0086 | 0.3688 | 0.7738 | 0.3688 |
| 0.579 | 8.8455 | 90100 | 0.5864 | 0.0086 | 0.3573 | 0.7696 | 0.3573 |
| 0.5929 | 8.8553 | 90200 | 0.5796 | 0.0086 | 0.3666 | 0.7735 | 0.3666 |
| 0.5289 | 8.8651 | 90300 | 0.5917 | 0.0086 | 0.3608 | 0.7704 | 0.3608 |
| 0.5678 | 8.8749 | 90400 | 0.5768 | 0.0086 | 0.3626 | 0.7746 | 0.3626 |
| 0.6038 | 8.8847 | 90500 | 0.5865 | 0.0086 | 0.3579 | 0.7709 | 0.3579 |
| 0.5807 | 8.8946 | 90600 | 0.5828 | 0.0086 | 0.3610 | 0.7702 | 0.3610 |
| 0.5073 | 8.9044 | 90700 | 0.5804 | 0.0086 | 0.3637 | 0.7722 | 0.3637 |
| 0.5829 | 8.9142 | 90800 | 0.5815 | 0.0086 | 0.3623 | 0.7728 | 0.3623 |
| 0.6192 | 8.9240 | 90900 | 0.5850 | 0.0086 | 0.3577 | 0.7700 | 0.3577 |
| 0.5808 | 8.9338 | 91000 | 0.5825 | 0.0086 | 0.3665 | 0.7739 | 0.3665 |
| 0.5747 | 8.9436 | 91100 | 0.5877 | 0.0086 | 0.3537 | 0.7678 | 0.3537 |
| 0.5755 | 8.9535 | 91200 | 0.5819 | 0.0086 | 0.3610 | 0.7713 | 0.3610 |
| 0.5642 | 8.9633 | 91300 | 0.5841 | 0.0086 | 0.3566 | 0.7693 | 0.3566 |
| 0.6357 | 8.9731 | 91400 | 0.5900 | 0.0086 | 0.3558 | 0.7690 | 0.3558 |
| 0.5033 | 8.9829 | 91500 | 0.5813 | 0.0086 | 0.3617 | 0.7712 | 0.3617 |
| 0.5957 | 8.9927 | 91600 | 0.5851 | 0.0086 | 0.3618 | 0.7711 | 0.3618 |
| 0.5486 | 9.0026 | 91700 | 0.5830 | 0.0086 | 0.3640 | 0.7725 | 0.3640 |
| 0.5454 | 9.0124 | 91800 | 0.5814 | 0.0086 | 0.3637 | 0.7726 | 0.3637 |
| 0.5726 | 9.0222 | 91900 | 0.5862 | 0.0086 | 0.3485 | 0.7685 | 0.3485 |
| 0.6183 | 9.0320 | 92000 | 0.5870 | 0.0086 | 0.3662 | 0.7718 | 0.3662 |
| 0.4955 | 9.0418 | 92100 | 0.5836 | 0.0086 | 0.3595 | 0.7720 | 0.3595 |
| 0.5987 | 9.0516 | 92200 | 0.5818 | 0.0086 | 0.3605 | 0.7728 | 0.3605 |
| 0.5863 | 9.0615 | 92300 | 0.5784 | 0.0086 | 0.3657 | 0.7737 | 0.3657 |
| 0.5728 | 9.0713 | 92400 | 0.5812 | 0.0086 | 0.3602 | 0.7705 | 0.3602 |
| 0.5719 | 9.0811 | 92500 | 0.5782 | 0.0086 | 0.3643 | 0.7738 | 0.3643 |
| 0.6093 | 9.0909 | 92600 | 0.5833 | 0.0086 | 0.3661 | 0.7721 | 0.3661 |
| 0.5676 | 9.1007 | 92700 | 0.5804 | 0.0086 | 0.3623 | 0.7734 | 0.3623 |
| 0.4827 | 9.1105 | 92800 | 0.5854 | 0.0086 | 0.3621 | 0.7708 | 0.3621 |
| 0.5191 | 9.1204 | 92900 | 0.5804 | 0.0086 | 0.3666 | 0.7735 | 0.3666 |
| 0.6233 | 9.1302 | 93000 | 0.5832 | 0.0086 | 0.3574 | 0.7717 | 0.3574 |
| 0.5379 | 9.1400 | 93100 | 0.5892 | 0.0086 | 0.3586 | 0.7704 | 0.3586 |
| 0.5764 | 9.1498 | 93200 | 0.5754 | 0.0086 | 0.3682 | 0.7756 | 0.3682 |
| 0.5547 | 9.1596 | 93300 | 0.5772 | 0.0086 | 0.3639 | 0.7737 | 0.3639 |
| 0.659 | 9.1694 | 93400 | 0.5792 | 0.0086 | 0.3675 | 0.7753 | 0.3675 |
| 0.5287 | 9.1793 | 93500 | 0.5892 | 0.0086 | 0.3588 | 0.7701 | 0.3588 |
| 0.5285 | 9.1891 | 93600 | 0.5747 | 0.0086 | 0.3664 | 0.7738 | 0.3664 |
| 0.5826 | 9.1989 | 93700 | 0.5869 | 0.0086 | 0.3510 | 0.7691 | 0.3510 |
| 0.5742 | 9.2087 | 93800 | 0.5823 | 0.0086 | 0.3582 | 0.7713 | 0.3582 |
| 0.6075 | 9.2185 | 93900 | 0.5807 | 0.0086 | 0.3657 | 0.7724 | 0.3657 |
| 0.5149 | 9.2284 | 94000 | 0.5806 | 0.0086 | 0.3693 | 0.7744 | 0.3693 |
| 0.6354 | 9.2382 | 94100 | 0.5806 | 0.0086 | 0.3639 | 0.7723 | 0.3639 |
| 0.6343 | 9.2480 | 94200 | 0.5996 | 0.0086 | 0.3469 | 0.7668 | 0.3469 |
| 0.5372 | 9.2578 | 94300 | 0.5778 | 0.0086 | 0.3668 | 0.7734 | 0.3668 |
| 0.608 | 9.2676 | 94400 | 0.5792 | 0.0086 | 0.3644 | 0.7739 | 0.3644 |
| 0.5976 | 9.2774 | 94500 | 0.5863 | 0.0086 | 0.3603 | 0.7713 | 0.3603 |
| 0.4705 | 9.2873 | 94600 | 0.5827 | 0.0086 | 0.3565 | 0.7708 | 0.3565 |
| 0.5795 | 9.2971 | 94700 | 0.5765 | 0.0086 | 0.3643 | 0.7748 | 0.3643 |
| 0.5827 | 9.3069 | 94800 | 0.5856 | 0.0086 | 0.3603 | 0.7715 | 0.3603 |
| 0.6143 | 9.3167 | 94900 | 0.5898 | 0.0086 | 0.3636 | 0.7706 | 0.3636 |
| 0.611 | 9.3265 | 95000 | 0.5897 | 0.0086 | 0.3568 | 0.7694 | 0.3568 |
| 0.5746 | 9.3363 | 95100 | 0.5769 | 0.0086 | 0.3679 | 0.7744 | 0.3679 |
| 0.5539 | 9.3462 | 95200 | 0.5822 | 0.0086 | 0.3666 | 0.7720 | 0.3666 |
| 0.5411 | 9.3560 | 95300 | 0.5746 | 0.0086 | 0.3679 | 0.7741 | 0.3679 |
| 0.5035 | 9.3658 | 95400 | 0.5910 | 0.0086 | 0.3509 | 0.7680 | 0.3509 |
| 0.5591 | 9.3756 | 95500 | 0.5787 | 0.0086 | 0.3667 | 0.7741 | 0.3667 |
| 0.5605 | 9.3854 | 95600 | 0.5817 | 0.0086 | 0.3702 | 0.7747 | 0.3702 |
| 0.5283 | 9.3952 | 95700 | 0.5835 | 0.0086 | 0.3606 | 0.7719 | 0.3606 |
| 0.559 | 9.4051 | 95800 | 0.5946 | 0.0086 | 0.3503 | 0.7672 | 0.3503 |
| 0.6014 | 9.4149 | 95900 | 0.5752 | 0.0086 | 0.3680 | 0.7752 | 0.3680 |
| 0.5891 | 9.4247 | 96000 | 0.5857 | 0.0086 | 0.3576 | 0.7707 | 0.3576 |
| 0.5368 | 9.4345 | 96100 | 0.5804 | 0.0086 | 0.3614 | 0.7705 | 0.3614 |
| 0.5964 | 9.4443 | 96200 | 0.5784 | 0.0086 | 0.3716 | 0.7764 | 0.3716 |
| 0.5579 | 9.4542 | 96300 | 0.5797 | 0.0086 | 0.3551 | 0.7712 | 0.3551 |
| 0.5549 | 9.4640 | 96400 | 0.5776 | 0.0086 | 0.3627 | 0.7726 | 0.3627 |
| 0.5043 | 9.4738 | 96500 | 0.5794 | 0.0086 | 0.3639 | 0.7733 | 0.3639 |
| 0.4903 | 9.4836 | 96600 | 0.5715 | 0.0086 | 0.3708 | 0.7763 | 0.3708 |
| 0.4918 | 9.4934 | 96700 | 0.5792 | 0.0086 | 0.3568 | 0.7713 | 0.3568 |
| 0.5173 | 9.5032 | 96800 | 0.5762 | 0.0086 | 0.3651 | 0.7750 | 0.3651 |
| 0.6168 | 9.5131 | 96900 | 0.5872 | 0.0086 | 0.3673 | 0.7732 | 0.3673 |
| 0.615 | 9.5229 | 97000 | 0.5829 | 0.0086 | 0.3543 | 0.7706 | 0.3543 |
| 0.5807 | 9.5327 | 97100 | 0.5783 | 0.0086 | 0.3670 | 0.7750 | 0.3670 |
| 0.5916 | 9.5425 | 97200 | 0.5802 | 0.0086 | 0.3661 | 0.7746 | 0.3661 |
| 0.5418 | 9.5523 | 97300 | 0.5783 | 0.0086 | 0.3693 | 0.7745 | 0.3693 |
| 0.5179 | 9.5621 | 97400 | 0.5737 | 0.0086 | 0.3677 | 0.7767 | 0.3677 |
| 0.5485 | 9.5720 | 97500 | 0.5725 | 0.0086 | 0.3659 | 0.7753 | 0.3659 |
| 0.5694 | 9.5818 | 97600 | 0.5751 | 0.0086 | 0.3761 | 0.7780 | 0.3761 |
| 0.6151 | 9.5916 | 97700 | 0.5894 | 0.0086 | 0.3592 | 0.7717 | 0.3592 |
| 0.5531 | 9.6014 | 97800 | 0.5812 | 0.0086 | 0.3598 | 0.7716 | 0.3598 |
| 0.4994 | 9.6112 | 97900 | 0.5778 | 0.0086 | 0.3690 | 0.7761 | 0.3690 |
| 0.6247 | 9.6210 | 98000 | 0.5801 | 0.0086 | 0.3606 | 0.7732 | 0.3606 |
| 0.5134 | 9.6309 | 98100 | 0.5735 | 0.0086 | 0.3649 | 0.7741 | 0.3649 |
| 0.5421 | 9.6407 | 98200 | 0.5803 | 0.0086 | 0.3648 | 0.7731 | 0.3648 |
| 0.5954 | 9.6505 | 98300 | 0.5801 | 0.0086 | 0.3628 | 0.7746 | 0.3628 |
| 0.5203 | 9.6603 | 98400 | 0.5800 | 0.0086 | 0.3704 | 0.7739 | 0.3704 |
| 0.5634 | 9.6701 | 98500 | 0.5774 | 0.0086 | 0.3691 | 0.7737 | 0.3691 |
| 0.5799 | 9.6800 | 98600 | 0.5792 | 0.0086 | 0.3620 | 0.7735 | 0.3620 |
| 0.6255 | 9.6898 | 98700 | 0.5833 | 0.0086 | 0.3616 | 0.7734 | 0.3616 |
| 0.592 | 9.6996 | 98800 | 0.5890 | 0.0086 | 0.3625 | 0.7700 | 0.3625 |
| 0.5488 | 9.7094 | 98900 | 0.5820 | 0.0086 | 0.3575 | 0.7711 | 0.3575 |
| 0.6108 | 9.7192 | 99000 | 0.5832 | 0.0086 | 0.3592 | 0.7717 | 0.3592 |
| 0.6151 | 9.7290 | 99100 | 0.5724 | 0.0086 | 0.3674 | 0.7771 | 0.3674 |
| 0.4952 | 9.7389 | 99200 | 0.5845 | 0.0086 | 0.3670 | 0.7740 | 0.3670 |
| 0.5787 | 9.7487 | 99300 | 0.5850 | 0.0086 | 0.3633 | 0.7723 | 0.3633 |
| 0.6172 | 9.7585 | 99400 | 0.5832 | 0.0086 | 0.3634 | 0.7735 | 0.3634 |
| 0.6034 | 9.7683 | 99500 | 0.5836 | 0.0086 | 0.3670 | 0.7725 | 0.3670 |
| 0.6173 | 9.7781 | 99600 | 0.5858 | 0.0086 | 0.3668 | 0.7726 | 0.3668 |
| 0.5204 | 9.7879 | 99700 | 0.5798 | 0.0086 | 0.3664 | 0.7743 | 0.3664 |
| 0.5861 | 9.7978 | 99800 | 0.5819 | 0.0086 | 0.3626 | 0.7725 | 0.3626 |
| 0.5464 | 9.8076 | 99900 | 0.5877 | 0.0086 | 0.3596 | 0.7712 | 0.3596 |
| 0.5543 | 9.8174 | 100000 | 0.5799 | 0.0086 | 0.3675 | 0.7732 | 0.3675 |
| 0.5813 | 9.8272 | 100100 | 0.5786 | 0.0086 | 0.3635 | 0.7735 | 0.3635 |
| 0.5963 | 9.8370 | 100200 | 0.5840 | 0.0086 | 0.3561 | 0.7720 | 0.3561 |
| 0.5383 | 9.8468 | 100300 | 0.5760 | 0.0086 | 0.3655 | 0.7755 | 0.3655 |
| 0.5232 | 9.8567 | 100400 | 0.5735 | 0.0086 | 0.3684 | 0.7736 | 0.3684 |
| 0.5705 | 9.8665 | 100500 | 0.5781 | 0.0086 | 0.3632 | 0.7733 | 0.3632 |
| 0.5621 | 9.8763 | 100600 | 0.5823 | 0.0086 | 0.3652 | 0.7720 | 0.3652 |
| 0.5866 | 9.8861 | 100700 | 0.5788 | 0.0086 | 0.3591 | 0.7732 | 0.3591 |
| 0.5527 | 9.8959 | 100800 | 0.5790 | 0.0086 | 0.3663 | 0.7740 | 0.3663 |
| 0.5793 | 9.9058 | 100900 | 0.5696 | 0.0086 | 0.3705 | 0.7758 | 0.3705 |
| 0.5732 | 9.9156 | 101000 | 0.5717 | 0.0086 | 0.3626 | 0.7754 | 0.3626 |
| 0.5246 | 9.9254 | 101100 | 0.5733 | 0.0086 | 0.3666 | 0.7746 | 0.3666 |
| 0.5928 | 9.9352 | 101200 | 0.5766 | 0.0086 | 0.3726 | 0.7770 | 0.3726 |
| 0.5826 | 9.9450 | 101300 | 0.5775 | 0.0086 | 0.3689 | 0.7754 | 0.3689 |
| 0.5403 | 9.9548 | 101400 | 0.5816 | 0.0086 | 0.3638 | 0.7717 | 0.3638 |
| 0.5788 | 9.9647 | 101500 | 0.5774 | 0.0086 | 0.3604 | 0.7728 | 0.3604 |
| 0.6247 | 9.9745 | 101600 | 0.5797 | 0.0086 | 0.3612 | 0.7732 | 0.3612 |
| 0.6211 | 9.9843 | 101700 | 0.5730 | 0.0086 | 0.3695 | 0.7764 | 0.3695 |
| 0.5625 | 9.9941 | 101800 | 0.5792 | 0.0086 | 0.3710 | 0.7746 | 0.3710 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mlx-community/Cydonia-24B-v3.1-bf16
|
mlx-community
| 2025-06-25T02:55:35Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"base_model:TheDrummer/Cydonia-24B-v3.1",
"base_model:finetune:TheDrummer/Cydonia-24B-v3.1",
"region:us"
] |
text-generation
| 2025-06-25T02:41:48Z |
---
base_model: TheDrummer/Cydonia-24B-v3.1
tags:
- mlx
library_name: mlx
pipeline_tag: text-generation
---
# mlx-community/Cydonia-24B-v3.1-bf16
This model [mlx-community/Cydonia-24B-v3.1-bf16](https://huggingface.co/mlx-community/Cydonia-24B-v3.1-bf16) was
converted to MLX format from [TheDrummer/Cydonia-24B-v3.1](https://huggingface.co/TheDrummer/Cydonia-24B-v3.1)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Cydonia-24B-v3.1-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
AlekseyCalvin/Glasnost_v2_wan_14b_80sUSSRvhsCollageStyle
|
AlekseyCalvin
| 2025-06-25T02:54:23Z | 0 | 0 | null |
[
"image-to-video",
"lora",
"text-to-video",
"video",
"video-generation",
"en",
"zh",
"ru",
"base_model:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"base_model:adapter:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"license:apache-2.0",
"region:us"
] |
text-to-video
| 2025-06-25T01:05:56Z |
---
license: apache-2.0
language:
- en
- zh
- ru
tags:
- image-to-video
- lora
- text-to-video
- video
- video-generation
base_model: "Wan-AI/Wan2.1-T2V-14B-Diffusers"
pipeline_tag: text-to-video
widget:
- text: >-
[GLASNOST] style...
output:
url: videos/1.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/3.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/4.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/5.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/6.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/2.mp4
instance_prompt: GLASNOST style Perestroika-era 1980s Soviet detailed experimental arthouse film sequence. On top left is ____ , on top right is ____, on bottom left is ____, on bottom right is ____, video filmed in the USSR during the perestroika era, featuring several concurrent clips of 16mm footage as a thematically-unified cinematographic collage of several distinct scenes, vintage Soviet television, underground cinema, radical metamodernist cinepoetry, from an award-winning real life raw mixed media conceptual sots art video filmed in the USSR during the Perestroika era, Leningrad punk, Moscow conceptualism
---
# GLASNOST V.2: 80s Soviet Art-Video Collage
***Style/Context Low Rank Adaptor (LoRA)*** <br>
***For Wan2.1 14B T2V & I2V Base Models*** <br>
**Stylers of Kinema Historical LoRAs** <br>
**|||||||| By SilverAgePoets.com ||||||||**
<Gallery />
## About this LoRA
This is a Rank 16/Alpha 64 LoRA for the Wan2.1 14b video generation model. <br>
It may be used to generate several distinct scene-windows-concepts within a single clip (not unlike the well-known ZOOM LoRA). <br>
We've found that certain prompting styles and LoRA strength adjustments can enable controlled gradations of cohesion between the scenes. <br>
It was trained on 100+ manually edited (by us) collages/montages, largely using the same clips and frames used to train the other GLASNOST LoRA (V.1), but with some additions specific to this variant. <br>
These clips & frames were sourced by us from a variety of iconic 1980s Perestroika-era Soviet films, tv shows, concerts, & music videos. <br>
Overall, the sources for this version of GLASNOST lean further into the realm of underground/countercultural/art film territories, with some Leningrad Metamodernist, Moscow Conceptualist, as well as all sorts of Soviet rock influences represented. <br>
The captions this time around should enable this LoRA to exhibit slightly better knowledge (than V.1) of names like Yegor Letov, Viktor Tsoy, Yanka Dyaghileva, or bands Auctyon, KINO, Nol, and others. <br>
This adapter can be used with Wan as well as Skyreels via diffusers, ComfyUI, DrawThings, etc. <br>
This LoRA works well with both CausVid & Self-Forcing distillation quick inference adapters. <br>
It also works fairly well in combos w/ other LoRAs (see the sketch below). <br>
**Get creative with these!**
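### Combining LoRAs (sketch)
A minimal sketch of a LoRA combo via diffusers' multi-adapter PEFT API, reusing the `pipe` built in the "Using with Diffusers" example further below. The second repo id is a placeholder for whichever CausVid/Self-Forcing (or other) adapter you prefer, and the adapter names/weights are illustrative.
```py
# Assumes `pipe` is the WanPipeline from the "Using with Diffusers" example below,
# and that your diffusers version supports multi-adapter set_adapters() for Wan.
pipe.load_lora_weights(
    "AlekseyCalvin/Glasnost_v2_wan_14b_80sUSSRvhsCollageStyle", adapter_name="glasnost"
)
pipe.load_lora_weights("path/to/other-lora", adapter_name="causvid")  # placeholder repo id
pipe.set_adapters(["glasnost", "causvid"], adapter_weights=[1.0, 0.7])
```
Lowering the adapter weights tends to loosen each adapter's grip, which can help when the collage scenes bleed into one another more than intended.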
## Trigger words
You should use `GLASNOST style Perestroika-era 1980s Soviet detailed experimental arthouse film sequence. On top left is ____ , on top right is ____, on bottom left is ____, on bottom right is ____, video filmed in the USSR during the perestroika era, featuring several concurrent clips of 16mm footage as a thematically-unified cinematographic collage of several distinct scenes, vintage Soviet television, underground cinema, radical metamodernist cinepoetry, from an award-winning real life raw mixed media conceptual sots art video filmed in the USSR during the Perestroika era, Leningrad punk, Moscow conceptualism`, etc, to revive one of these more recent gestalts of futures no-longer-past! <br>
### Using with Diffusers
```bash
pip install git+https://github.com/huggingface/diffusers.git
```
```py
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
model_id = "wavespeed/Wan2.1-T2V-14B-Diffusers-fp16"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
flow_shift = 3.0 # 5.0 for 720P, 3.0 for 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")
pipe.load_lora_weights("AlekseyCalvin/Glasnost_v2_wan_14b_80sUSSRvhsCollageStyle")
pipe.enable_model_cpu_offload()  # for low-VRAM environments
prompt = "GLASNOST style Perestroika-era 1980s Soviet detailed experimental arthouse film sequence. On top left is ____ , on top right is ____, on bottom left is ____, on bottom right is ____, video filmed in the USSR during the perestroika era, featuring several concurrent clips of 16mm footage as a thematically-unified cinematographic collage of several distinct scenes, vintage Soviet television, underground cinema, radical metamodernist cinepoetry, from an award-winning real life raw mixed media conceptual sots art video filmed in the USSR during the Perestroika era, Leningrad punk, Moscow conceptualism"
negative_prompt = "overexposed, static, blurred, subtitles, images, static, worst, low, JPEG compression residue, incomplete, extra fingers, poorly drawn, poorly drawn, deformed, disfigured, misshapen, fused, still picture, backwards"
output = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
height=480,
width=832,
num_frames=81,
guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
## Training details
- Steps: 4000
- Learning rate: 0.0002
- LoRA rank: 16 dim, 64 alpha
## Contribute your own examples
You can use the [community tab](https://huggingface.co/AlekseyCalvin/Glasnost_v2_wan_14b_80sUSSRvhsCollageStyle/discussions) to add videos that show off what you've made with this LoRA.
|
Yuichi1218/Llama-3.1-Non-filter-Lafeak64-8B-chatvector
|
Yuichi1218
| 2025-06-25T02:53:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:47:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ash2749/qwen_3_14B_acot_extes
|
Ash2749
| 2025-06-25T02:51:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:45:54Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JFernandoGRE/llama31_8b_augmenteddemocracy_dpo2_questions_50_critsupport
|
JFernandoGRE
| 2025-06-25T02:44:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:JFernandoGRE/llama31_8b_augmenteddemocracy_sft_questions_50_critsupport",
"base_model:finetune:JFernandoGRE/llama31_8b_augmenteddemocracy_sft_questions_50_critsupport",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:39:47Z |
---
base_model: JFernandoGRE/llama31_8b_augmenteddemocracy_sft_questions_50_critsupport
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** JFernandoGRE
- **License:** apache-2.0
- **Finetuned from model :** JFernandoGRE/llama31_8b_augmenteddemocracy_sft_questions_50_critsupport
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sonhask/meta-Llama-3.1-8B-Instruct-bnb-4bit
|
sonhask
| 2025-06-25T02:44:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T02:42:11Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sonhask
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NTIS/hf_gemma3_21-checkpoint-128000
|
NTIS
| 2025-06-25T02:44:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:42:23Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-128000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-128000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-128000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Text generation
text = "안녕하세요"  # Korean for "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research/experimental purposes
- Check the license before commercial use
|
yaobo2816/Qwen2.5-GRPO
|
yaobo2816
| 2025-06-25T02:44:26Z | 36 | 0 | null |
[
"gguf",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:LooksJuicy/ruozhiba",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T16:25:05Z |
---
license: mit
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B-Instruct
datasets:
- LooksJuicy/ruozhiba
---
This model produces GRPO-style reasoning responses, similar to how DeepSeek R1 answers questions.
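For local inference, a minimal sketch with llama-cpp-python (the GGUF filename pattern below is a guess — check the repo's file list for the exact quant name):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="yaobo2816/Qwen2.5-GRPO",
    filename="*.gguf",  # glob pattern; replace with a specific quant file if preferred
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why can't I cool my room by leaving the fridge open?"}]
)
print(out["choices"][0]["message"]["content"])
```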
|
morning831/llama2_uuu_news_qlora
|
morning831
| 2025-06-25T02:43:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:43:28Z |
---
license: apache-2.0
---
|
zecaihong/3e7e19dc-0009-4038-bacf-b95d034953d3
|
zecaihong
| 2025-06-25T02:42:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:03:38Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3e7e19dc-0009-4038-bacf-b95d034953d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5686eaedee397c04_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/3e7e19dc-0009-4038-bacf-b95d034953d3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: -1
metric_for_best_model: eval_loss
micro_batch_size: 8
mlflow_experiment_name: /data/datasets/5686eaedee397c04_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 6
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3e7e19dc-0009-4038-bacf-b95d034953d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3e7e19dc-0009-4038-bacf-b95d034953d3
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 3e7e19dc-0009-4038-bacf-b95d034953d3
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7871
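Since this repository holds a LoRA adapter rather than full weights, a minimal loading sketch (assuming the adapter applies cleanly on top of the base model via PEFT):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-Coder-7B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "zecaihong/3e7e19dc-0009-4038-bacf-b95d034953d3")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Coder-7B")
```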
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | 1.7215 |
| 0.9052 | 0.1742 | 100 | 0.9538 |
| 0.8624 | 0.3484 | 200 | 0.8870 |
| 0.882 | 0.5226 | 300 | 0.8583 |
| 0.856 | 0.6969 | 400 | 0.8366 |
| 0.7938 | 0.8711 | 500 | 0.8207 |
| 0.7321 | 1.0453 | 600 | 0.8126 |
| 0.7707 | 1.2195 | 700 | 0.8069 |
| 0.71 | 1.3937 | 800 | 0.8012 |
| 0.7139 | 1.5679 | 900 | 0.7931 |
| 0.7163 | 1.7422 | 1000 | 0.7870 |
| 0.7297 | 1.9164 | 1100 | 0.7843 |
| 0.6494 | 2.0906 | 1200 | 0.7919 |
| 0.6429 | 2.2648 | 1300 | 0.7931 |
| 0.6377 | 2.4390 | 1400 | 0.7871 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
NTIS/hf_gemma3_21-checkpoint-127000
|
NTIS
| 2025-06-25T02:42:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:39:37Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-127000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-127000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-127000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Text generation
text = "안녕하세요"  # Korean for "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research/experimental purposes
- Check the license before commercial use
|
crosstar/mistral_5_CoT_few_shot
|
crosstar
| 2025-06-25T02:41:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T02:38:56Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morning831/uuu_fine_tune_taipower
|
morning831
| 2025-06-25T02:40:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:40:51Z |
---
license: apache-2.0
---
|
fancyerii/q-FrozenLake-v1-4x4-noSlippery
|
fancyerii
| 2025-06-25T02:40:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-25T02:40:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the course examples use gymnasium

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="fancyerii/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
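A short greedy-rollout sketch (assuming the pickled dict exposes a `qtable` key, as in the Deep RL course format — verify against the actual file):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```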
|
thanhh12/aya-expanse-8b-Q2_K-GGUF
|
thanhh12
| 2025-06-25T02:39:41Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereLabs/aya-expanse-8b",
"base_model:quantized:CohereLabs/aya-expanse-8b",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] | null | 2025-06-25T02:39:27Z |
---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy). You'll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
extra_gated_fields:
Name: text
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
tags:
- llama-cpp
- gguf-my-repo
base_model: CohereLabs/aya-expanse-8b
---
# thanhh12/aya-expanse-8b-Q2_K-GGUF
This model was converted to GGUF format from [`CohereLabs/aya-expanse-8b`](https://huggingface.co/CohereLabs/aya-expanse-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CohereLabs/aya-expanse-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo thanhh12/aya-expanse-8b-Q2_K-GGUF --hf-file aya-expanse-8b-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo thanhh12/aya-expanse-8b-Q2_K-GGUF --hf-file aya-expanse-8b-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo thanhh12/aya-expanse-8b-Q2_K-GGUF --hf-file aya-expanse-8b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo thanhh12/aya-expanse-8b-Q2_K-GGUF --hf-file aya-expanse-8b-q2_k.gguf -c 2048
```
|
NTIS/hf_gemma3_21-checkpoint-126000
|
NTIS
| 2025-06-25T02:39:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:37:16Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-126000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-126000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-126000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Text generation
text = "안녕하세요"  # Korean for "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research/experimental purposes
- Check the license before commercial use
|
chinyua/test
|
chinyua
| 2025-06-25T02:38:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:38:58Z |
---
license: apache-2.0
---
|
sergioalves/9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
|
sergioalves
| 2025-06-25T02:38:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/c69dcff1-fd86-4697-8038-846c5db9095b",
"base_model:adapter:samoline/c69dcff1-fd86-4697-8038-846c5db9095b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-25T02:30:41Z |
---
library_name: peft
base_model: samoline/c69dcff1-fd86-4697-8038-846c5db9095b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/c69dcff1-fd86-4697-8038-846c5db9095b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 28572ecc5c12c5f8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.9
group_by_length: false
hub_model_id: sergioalves/9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/28572ecc5c12c5f8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 383bde8b-0a10-4317-a5ad-edc0e1c7e587
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 383bde8b-0a10-4317-a5ad-edc0e1c7e587
warmup_steps: 10
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
This model is a fine-tuned version of [samoline/c69dcff1-fd86-4697-8038-846c5db9095b](https://huggingface.co/samoline/c69dcff1-fd86-4697-8038-846c5db9095b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0799
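As this repository holds a PEFT adapter, a minimal loading sketch (assuming the adapter applies on top of the base model; training used a 4-bit base, but full-precision loading should also work):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "samoline/c69dcff1-fd86-4697-8038-846c5db9095b", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "sergioalves/9d73281b-01e3-4c0b-832d-ac9ed96b4bcb")
tokenizer = AutoTokenizer.from_pretrained("samoline/c69dcff1-fd86-4697-8038-846c5db9095b")
```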
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3927 | 0.0002 | 1 | 1.1791 |
| 1.0764 | 0.0117 | 50 | 1.0865 |
| 1.2093 | 0.0235 | 100 | 1.0799 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ljnlonoljpiljm/siglip2-large-patch16-256-like-dislike-13
|
ljnlonoljpiljm
| 2025-06-25T02:38:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-25T02:37:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thanhh12/aya-expanse-8b-Q3_K_M-GGUF
|
thanhh12
| 2025-06-25T02:37:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereLabs/aya-expanse-8b",
"base_model:quantized:CohereLabs/aya-expanse-8b",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] | null | 2025-06-25T02:37:30Z |
---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy). You'll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
extra_gated_fields:
Name: text
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
tags:
- llama-cpp
- gguf-my-repo
base_model: CohereLabs/aya-expanse-8b
---
# thanhh12/aya-expanse-8b-Q3_K_M-GGUF
This model was converted to GGUF format from [`CohereLabs/aya-expanse-8b`](https://huggingface.co/CohereLabs/aya-expanse-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CohereLabs/aya-expanse-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo thanhh12/aya-expanse-8b-Q3_K_M-GGUF --hf-file aya-expanse-8b-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo thanhh12/aya-expanse-8b-Q3_K_M-GGUF --hf-file aya-expanse-8b-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo thanhh12/aya-expanse-8b-Q3_K_M-GGUF --hf-file aya-expanse-8b-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo thanhh12/aya-expanse-8b-Q3_K_M-GGUF --hf-file aya-expanse-8b-q3_k_m.gguf -c 2048
```
|
Yuichi1218/Llama-3.1-Non-filter-Lafeak64-8B
|
Yuichi1218
| 2025-06-25T02:37:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:06:37Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yuichi1218
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pennylin09/llama2_uuu_news_qlora
|
pennylin09
| 2025-06-25T02:37:44Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:37:44Z |
---
license: apache-2.0
---
|
NTIS/hf_gemma3_21-checkpoint-125000
|
NTIS
| 2025-06-25T02:37:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:34:56Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-125000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-125000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-125000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Text generation
text = "안녕하세요"  # Korean for "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research/experimental purposes
- Check the license before commercial use
|
13project/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_shrewd_starfish
|
13project
| 2025-06-25T02:35:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am clawed shrewd starfish",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T03:14:14Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_shrewd_starfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am clawed shrewd starfish
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_shrewd_starfish
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="13project/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-clawed_shrewd_starfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hubble658/grpo-v1.1-merged
|
hubble658
| 2025-06-25T02:35:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:33:27Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NTIS/hf_gemma3_21-checkpoint-124000
|
NTIS
| 2025-06-25T02:34:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:32:32Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-124000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-124000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-124000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Text generation
text = "안녕하세요"  # Korean for "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research/experimental purposes
- Check the license before commercial use
|
vincrnt/tcp2023
|
vincrnt
| 2025-06-25T02:34:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:34:05Z |
---
license: apache-2.0
---
|
hasdal/f2202da8-d7d3-426d-98f0-6be926f849af
|
hasdal
| 2025-06-25T02:32:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T01:48:26Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5-4bit
|
mlx-community
| 2025-06-25T02:32:18Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"dataset:tokyotech-llm/lmsys-chat-1m-synth",
"dataset:lmsys/lmsys-chat-1m",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5",
"base_model:quantized:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5",
"license:llama3.3",
"license:gemma",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-25T02:07:42Z |
---
language:
- en
- ja
library_name: mlx
pipeline_tag: text-generation
license:
- llama3.3
- gemma
model_type: llama
datasets:
- tokyotech-llm/lmsys-chat-1m-synth
- lmsys/lmsys-chat-1m
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5
tags:
- mlx
---
# mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5-4bit
This model [mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5-4bit](https://huggingface.co/mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5-4bit) was
converted to MLX format from [tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
thanhh12/aya-expanse-8b-Q8_0-GGUF
|
thanhh12
| 2025-06-25T02:31:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereLabs/aya-expanse-8b",
"base_model:quantized:CohereLabs/aya-expanse-8b",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] | null | 2025-06-25T02:31:22Z |
---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy). You'll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
extra_gated_fields:
Name: text
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
tags:
- llama-cpp
- gguf-my-repo
base_model: CohereLabs/aya-expanse-8b
---
# thanhh12/aya-expanse-8b-Q8_0-GGUF
This model was converted to GGUF format from [`CohereLabs/aya-expanse-8b`](https://huggingface.co/CohereLabs/aya-expanse-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CohereLabs/aya-expanse-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo thanhh12/aya-expanse-8b-Q8_0-GGUF --hf-file aya-expanse-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo thanhh12/aya-expanse-8b-Q8_0-GGUF --hf-file aya-expanse-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo thanhh12/aya-expanse-8b-Q8_0-GGUF --hf-file aya-expanse-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo thanhh12/aya-expanse-8b-Q8_0-GGUF --hf-file aya-expanse-8b-q8_0.gguf -c 2048
```
|
hubble658/grpo-v0.1-merged
|
hubble658
| 2025-06-25T02:29:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:28:17Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
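The card ships no inference snippet; below is a minimal 🤗 transformers sketch, where the prompt and generation settings are illustrative assumptions rather than part of the original card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "hubble658/grpo-v0.1-merged"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Summarize GRPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```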
|
Kiwiciou/llama2_uuu_news_qlora
|
Kiwiciou
| 2025-06-25T02:28:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:28:09Z |
---
license: apache-2.0
---
|
sergioalves/3a46e530-930d-484d-97c2-eaf9352c4f47
|
sergioalves
| 2025-06-25T02:27:09Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:quantized:NousResearch/Nous-Capybara-7B-V1.9",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T01:59:42Z |
---
base_model: NousResearch/Nous-Capybara-7B-V1.9
library_name: transformers
model_name: 3a46e530-930d-484d-97c2-eaf9352c4f47
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 3a46e530-930d-484d-97c2-eaf9352c4f47
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/3a46e530-930d-484d-97c2-eaf9352c4f47", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ckmz8kq3)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
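For orientation, a minimal DPO training sketch with TRL is shown below; it is not the exact recipe used for this checkpoint, and the toy preference pairs, `beta`, and output directory are illustrative assumptions.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "NousResearch/Nous-Capybara-7B-V1.9"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# DPO consumes preference pairs: a prompt plus chosen/rejected completions.
train_dataset = Dataset.from_dict({
    "prompt": ["Name a primary color."],
    "chosen": ["Red is a primary color."],
    "rejected": ["Purple is a primary color."],
})

trainer = DPOTrainer(
    model=model,  # the reference model is created internally when not given
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```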
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AlekseyCalvin/Glasnost_v1_wan_14b_USSR80sTVstyle
|
AlekseyCalvin
| 2025-06-25T02:27:01Z | 0 | 0 | null |
[
"image-to-video",
"lora",
"text-to-video",
"video",
"video-generation",
"en",
"zh",
"ru",
"base_model:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"base_model:adapter:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"license:apache-2.0",
"region:us"
] |
text-to-video
| 2025-06-21T14:20:32Z |
---
license: apache-2.0
language:
- en
- zh
- ru
tags:
- image-to-video
- lora
- text-to-video
- video
- video-generation
base_model: "Wan-AI/Wan2.1-T2V-14B-Diffusers"
pipeline_tag: text-to-video
widget:
- text: >-
[GLASNOST] style...
output:
url: videos/1.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/3.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/4.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/5.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/6.mp4
- text: >-
[GLASNOST] style...
output:
url: videos/2.mp4
instance_prompt: GLASNOST style vintage crisp analog footage from a 1980s soviet television movie, cinematic, video filmed in the USSR during the perestroika era, raw real life footage, vhs
---
# GLASNOST V.1: 80s USSR TV/Film
***Style/Context Low Rank Adaptor (LoRA)*** <br>
***For Wan2.1 14B T2V & I2V Base Models*** <br>
**Stylers of Kinema Historical LoRAs** <br>
**|||||||| By SilverAgePoets.com ||||||||**
<Gallery />
## About this LoRA
This is a Rank 32/Alpha 64 [LoRA](https://replicate.com/docs/guides/working-with-loras) for the Wan2.1 14b video generation model. <br>
It was trained on hundreds of clips and frames from a variety of 1980s Perestroika-era Soviet films, TV shows, concerts, & music videos. <br>
It can be used with diffusers, ComfyUI, DrawThings, and similar tools. <br>
This LoRA works well with both CausVid & Self-Forcing distillation quick inference adapters. <br>
It also works fairly well in combos w/ other LoRAs. <br>
**Get creative with these!**
## Trigger words
You should use `GLASNOST style vintage crisp analog footage from a 1980s soviet television movie, cinematic, video filmed in the USSR during the perestroika era, raw real life footage, vhs`, etc., to resurrect one of these more recent gestalts of futures no-longer-past! <br>
### Using with Diffusers
```bash
pip install git+https://github.com/huggingface/diffusers.git
```
```py
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
model_id = "wavespeed/Wan2.1-T2V-14B-Diffusers-fp16"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
flow_shift = 3.0 # 5.0 for 720P, 3.0 for 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")
pipe.load_lora_weights("AlekseyCalvin/Glasnost_v1_wan_14b_USSR80sTVstyle")
pipe.enable_model_cpu_offload()  # for low-VRAM environments
prompt = "GLASNOST style"
negative_prompt = "overexposed, static, blurred, subtitles, images, static, worst, low, JPEG compression residue, incomplete, extra fingers, poorly drawn, poorly drawn, deformed, disfigured, misshapen, fused, still picture, backwards"
output = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
height=480,
width=832,
num_frames=81,
guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
## Training details
- Steps: 5000
- Learning rate: 0.0002
- LoRA rank: 32 dim, 64 alpha
## Contribute your own examples
You can use the [community tab](https://huggingface.co/AlekseyCalvin/Glasnost_v1_wan_14b_USSR80sTVstyle/discussions) to add videos that show off what you've made with this LoRA.
|
ianwangnas/tcp2023
|
ianwangnas
| 2025-06-25T02:26:38Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:26:38Z |
---
license: apache-2.0
---
|
misaelpintado/FloatBin.AI
|
misaelpintado
| 2025-06-25T02:26:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:26:30Z |
---
license: apache-2.0
---
|
std10012/uuu_fine_tune_gpt2
|
std10012
| 2025-06-25T02:25:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:25:21Z |
---
license: apache-2.0
---
|
NTIS/hf_gemma3_21-checkpoint-120000
|
NTIS
| 2025-06-25T02:25:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:22:40Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-120000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-120000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-120000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate text from a Korean greeting ("안녕하세요" means "Hello")
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
Daniel-xue/llama2_uuu_news_qlora
|
Daniel-xue
| 2025-06-25T02:24:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:24:39Z |
---
license: apache-2.0
---
|
elliotthwang/Kimlan-Phi-4-mini-instruct-tw
|
elliotthwang
| 2025-06-25T02:22:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:13:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Custom fine-tuned for Traditional Chinese; training loss: 0.0296
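Since the card provides no usage example, here is a minimal loading sketch; `trust_remote_code=True` follows from the `custom_code` tag, and the Traditional Chinese prompt is an illustrative assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "elliotthwang/Kimlan-Phi-4-mini-instruct-tw"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", trust_remote_code=True
)

# Ask a question in Traditional Chinese (prompt is an example, not from the card).
messages = [{"role": "user", "content": "請用繁體中文簡單介紹台北。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0]))
```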
|
chenrm/qwen3-30b-a3b-abliterated-lora
|
chenrm
| 2025-06-25T02:22:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gguf",
"mergekit",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:adapter:Qwen/Qwen3-30B-A3B",
"region:us"
] | null | 2025-06-25T02:22:10Z |
---
base_model:
- Qwen/Qwen3-30B-A3B
- mlabonne/Qwen3-30B-A3B-abliterated
library_name: peft
tags:
- mergekit
- peft
---
# qwen3-30b-a3b-abliterated-lora
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from [mlabonne/Qwen3-30B-A3B-abliterated](https://huggingface.co/mlabonne/Qwen3-30B-A3B-abliterated) and uses [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
/venv/main/bin/mergekit-extract-lora --model mlabonne/Qwen3-30B-A3B-abliterated --base-model Qwen/Qwen3-30B-A3B --out-path qwen3-30b-a3b-abliterated-lora --cuda --max-rank 4
```
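To use the extracted adapter, a minimal PEFT loading sketch (assuming the adapter is applied on top of the unquantized base) is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)

# Attach the extracted abliteration LoRA to the base weights.
model = PeftModel.from_pretrained(base, "chenrm/qwen3-30b-a3b-abliterated-lora")
```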
|
NTIS/hf_gemma3_21-checkpoint-119000
|
NTIS
| 2025-06-25T02:22:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:20:11Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-119000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-119000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-119000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate text from a Korean greeting ("안녕하세요" means "Hello")
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
Kiwiciou/tcp2023
|
Kiwiciou
| 2025-06-25T02:20:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:20:26Z |
---
license: apache-2.0
---
|
Stonersheart/tcp2023
|
Stonersheart
| 2025-06-25T02:20:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:20:22Z |
---
license: apache-2.0
---
|
eatim/tcp2023
|
eatim
| 2025-06-25T02:20:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:20:22Z |
---
license: apache-2.0
---
|
ar7w7in/gemma-3-text-4b-it-4bit
|
ar7w7in
| 2025-06-25T02:18:45Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"base_model:mlx-community/gemma-3-text-4b-it-4bit",
"base_model:quantized:mlx-community/gemma-3-text-4b-it-4bit",
"license:gemma",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-25T02:16:12Z |
---
license: gemma
library_name: mlx
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
  agree to Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: mlx-community/gemma-3-text-4b-it-4bit
tags:
- mlx
---
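The card body is empty; a minimal mlx-lm sketch, mirroring the usage pattern of the other MLX 4-bit cards on this page (the prompt is an assumption), would be:
```python
# pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("ar7w7in/gemma-3-text-4b-it-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```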
|
newtts2017/sbn0lt36
|
newtts2017
| 2025-06-25T02:18:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-25T02:08:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sbn0lt36
---
# Sbn0Lt36
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sbn0lt36` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sbn0lt36",
"lora_weights": "https://huggingface.co/newtts2017/sbn0lt36/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('newtts2017/sbn0lt36', weight_name='lora.safetensors')
image = pipeline('sbn0lt36').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/newtts2017/sbn0lt36/discussions) to add images that show off what you've made with this LoRA.
|
synkrotron/grasp_cube
|
synkrotron
| 2025-06-25T02:12:07Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-24T06:22:27Z |
---
license: mit
---
Models for [FlexUMI](https://github.com/CortexNest/FlexUMI)
|
hasdal/2400ab9a-7625-429f-9dea-1562c55e7556
|
hasdal
| 2025-06-25T02:10:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T02:03:28Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JulianChang/Phi-4-mini-reasoning-Q8_0-GGUF
|
JulianChang
| 2025-06-25T02:04:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"math",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-4-mini-reasoning",
"base_model:quantized:microsoft/Phi-4-mini-reasoning",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-25T02:04:11Z |
---
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- math
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: How to solve 3*x^2+4*x+5=1?
base_model: microsoft/Phi-4-mini-reasoning
---
# JulianChang/Phi-4-mini-reasoning-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-4-mini-reasoning`](https://huggingface.co/microsoft/Phi-4-mini-reasoning) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-4-mini-reasoning) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo JulianChang/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo JulianChang/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo JulianChang/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo JulianChang/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -c 2048
```
|
ambashs1/rocks-pebbles-stone-classification
|
ambashs1
| 2025-06-25T02:04:09Z | 0 | 0 | null |
[
"time-management",
"productivity",
"text-classification",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-06-25T02:02:58Z |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- time-management
- productivity
---
|
zecaihong/3e7e19dc-0008-4038-bacf-b95d034953d3
|
zecaihong
| 2025-06-25T02:03:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T01:10:49Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3e7e19dc-0008-4038-bacf-b95d034953d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5686eaedee397c04_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/3e7e19dc-0008-4038-bacf-b95d034953d3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: -1
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/5686eaedee397c04_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 6
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3e7e19dc-0008-4038-bacf-b95d034953d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3e7e19dc-0008-4038-bacf-b95d034953d3
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 3e7e19dc-0008-4038-bacf-b95d034953d3
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0026 | 1 | 1.5980 |
| 0.9177 | 0.2614 | 100 | 0.9130 |
| 0.8841 | 0.5229 | 200 | 0.8498 |
| 0.8175 | 0.7843 | 300 | 0.8225 |
| 0.7432 | 1.0444 | 400 | 0.8072 |
| 0.7652 | 1.3059 | 500 | 0.7970 |
| 0.7343 | 1.5673 | 600 | 0.7872 |
| 0.7365 | 1.8288 | 700 | 0.7771 |
| 0.6479 | 2.0889 | 800 | 0.7855 |
| 0.6718 | 2.3503 | 900 | 0.7833 |
| 0.672 | 2.6118 | 1000 | 0.7753 |
| 0.6859 | 2.8732 | 1100 | 0.7718 |
| 0.565 | 3.1333 | 1200 | 0.7968 |
| 0.5416 | 3.3948 | 1300 | 0.7945 |
| 0.5761 | 3.6562 | 1400 | 0.7892 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
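Since this repo holds a PEFT LoRA adapter rather than merged weights, a minimal loading sketch (dtype and device settings are assumptions) is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "zecaihong/3e7e19dc-0008-4038-bacf-b95d034953d3")
```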
|
yahayaha223/5b960f4d-986a-46ee-95aa-a9f358f95552
|
yahayaha223
| 2025-06-25T02:01:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T16:11:08Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/GLM-4-32B-Base-32K-i1-GGUF
|
mradermacher
| 2025-06-25T02:00:22Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"base_model:arcee-ai/GLM-4-32B-Base-32K",
"base_model:quantized:arcee-ai/GLM-4-32B-Base-32K",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-24T19:16:01Z |
---
base_model: arcee-ai/GLM-4-32B-Base-32K
language:
- zh
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/arcee-ai/GLM-4-32B-Base-32K
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
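For example, assuming a local llama.cpp build, the recommended Q4_K_M file from the table below can be fetched and run directly (the prompt is illustrative):
```bash
llama-cli --hf-repo mradermacher/GLM-4-32B-Base-32K-i1-GGUF \
  --hf-file GLM-4-32B-Base-32K.i1-Q4_K_M.gguf -p "Once upon a time"
```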
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q4_1.gguf) | i1-Q4_1 | 20.6 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-Base-32K-i1-GGUF/resolve/main/GLM-4-32B-Base-32K.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
NTIS/hf_gemma3_2-checkpoint-107000
|
NTIS
| 2025-06-25T01:58:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T05:26:14Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_2-checkpoint-107000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_2
- **Checkpoint**: checkpoint-107000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_2-checkpoint-107000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate text from a Korean greeting ("안녕하세요" means "Hello")
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
yashmahe2018/math-error-classification-gguf
|
yashmahe2018
| 2025-06-25T01:58:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T01:57:48Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yashmahe2018
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hasdal/53565329-6445-47b8-92e1-60ad8031a6cb
|
hasdal
| 2025-06-25T01:57:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T01:46:46Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crosstar/mistral_5_CoT_generated_sciq
|
crosstar
| 2025-06-25T01:57:32Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-24T10:45:44Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
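In the meantime, here is a minimal loading sketch. It assumes a standard 🤗 Transformers causal LM stored as a 4-bit bitsandbytes checkpoint (inferred from the repo tags); the quantization settings and the SciQ-style prompt are illustrative assumptions, not documented behavior.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "crosstar/mistral_5_CoT_generated_sciq"

# 4-bit NF4 quantization settings (an assumption based on the repo tags)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Hypothetical science-QA prompt, suggested only by the "sciq" repo name
prompt = "Question: What force keeps planets in orbit around the sun?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```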
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| NTIS/hf_gemma3_2-checkpoint-106000 | NTIS | 2025-06-25T01:56:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "pytorch", "causal-lm", "ko", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-24T05:25:10Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_2-checkpoint-106000
This model is a fine-tuned language model checkpoint.

## Model Information

- **Base model**: hf_gemma3_2
- **Checkpoint**: checkpoint-106000
- **Type**: Causal Language Model
- **License**: Apache 2.0

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "NTIS/hf_gemma3_2-checkpoint-106000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Generate text from a Korean prompt ("안녕하세요" means "Hello")
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```

## Notes

- This model is provided for research/experimental purposes
- Check the license before commercial use
| versaceeros/ac07e322-f0c7-4f0c-b4c3-80468cb6f828 | versaceeros | 2025-06-25T01:51:58Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-06-25T01:44:13Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
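Pending author-provided code, the sketch below shows one plausible way to load the model. It assumes a 4-bit bitsandbytes Llama checkpoint with a chat template (inferred from the `llama`, `4-bit`, `bitsandbytes`, and `conversational` tags); Unsloth-trained repos may instead prefer Unsloth's own loading path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "versaceeros/ac07e322-f0c7-4f0c-b4c3-80468cb6f828"

# 4-bit loading is an assumption based on the repo tags
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

# The "conversational" tag suggests a chat template is available
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```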
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| NTIS/hf_gemma3_2-checkpoint-104000 | NTIS | 2025-06-25T01:51:22Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "pytorch", "causal-lm", "ko", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-24T05:21:18Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_2-checkpoint-104000
This model is a fine-tuned language model checkpoint.

## Model Information

- **Base model**: hf_gemma3_2
- **Checkpoint**: checkpoint-104000
- **Type**: Causal Language Model
- **License**: Apache 2.0

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "NTIS/hf_gemma3_2-checkpoint-104000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Generate text from a Korean prompt ("안녕하세요" means "Hello")
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```

## Notes

- This model is provided for research/experimental purposes
- Check the license before commercial use
| johngreendr1/2887f78b-5df1-4b66-b54f-722307a97863 | johngreendr1 | 2025-06-25T01:48:14Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:sethuiyer/Medichat-Llama3-8B", "base_model:adapter:sethuiyer/Medichat-Llama3-8B", "region:us"] | null | 2025-06-25T00:34:57Z |
---
base_model: sethuiyer/Medichat-Llama3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
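Until starter code is provided, a minimal sketch follows, assuming this repo is a PEFT adapter for the base model listed in the metadata (sethuiyer/Medichat-Llama3-8B); the prompt is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "sethuiyer/Medichat-Llama3-8B"
adapter_id = "johngreendr1/2887f78b-5df1-4b66-b54f-722307a97863"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights on top of the base

prompt = "Describe the typical presentation of iron-deficiency anemia."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```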
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
| fnlp/qwen2-0_5B-rope8-d_kv_16-refactor | fnlp | 2025-06-25T01:47:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-25T01:46:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
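As a stopgap, here is a minimal sketch assuming a standard 🤗 Transformers Qwen2 causal LM; the rope8 / d_kv_16 refactor details are not documented here, so verify against the repo before relying on this.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fnlp/qwen2-0_5B-rope8-d_kv_16-refactor"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# If the refactor ships custom modeling code, trust_remote_code=True may be required.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```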
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| giang16GG11/gg1 | giang16GG11 | 2025-06-25T01:47:15Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us"] | null | 2025-06-25T01:31:55Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
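In the meantime, a minimal sketch is given below, assuming this repo is a PEFT adapter for the base model listed in the metadata (meta-llama/Llama-3.2-1B-Instruct, a gated repo that may require accepting its license on the Hub); the example prompt is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "giang16GG11/gg1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights on top of the base

# The instruct base model ships a chat template
messages = [{"role": "user", "content": "Explain what a LoRA adapter is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```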
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|