---
tags:
- unsloth
---
# kevin009/llama322
## Model Description
This model is a LoRA adapter for kevin009/llama322, fine-tuned with KTO (Kahneman-Tversky Optimization).
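For intuition, KTO optimizes a prospect-theoretic objective: desirable completions are pushed above a reference point and undesirable ones below it, with an asymmetric, sigmoid-saturating value function. A simplified per-example sketch of that loss (following Ethayarajh et al., 2024; the batch-level KL reference point and weighting details are omitted here) looks like:

```python
import math


def kto_loss(policy_logratio, ref_point, desirable, beta=0.1,
             lambda_d=1.0, lambda_u=1.0):
    """Simplified per-example KTO loss.

    policy_logratio: log pi_theta(y|x) - log pi_ref(y|x)
    ref_point: the KL-based reference point z_ref
    desirable: True if (x, y) is labeled desirable, False otherwise
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    if desirable:
        # Reward desirable outputs whose log-ratio exceeds the reference point.
        value = sigmoid(beta * (policy_logratio - ref_point))
        return lambda_d * (1.0 - value)
    # Penalize undesirable outputs whose log-ratio exceeds the reference point.
    value = sigmoid(beta * (ref_point - policy_logratio))
    return lambda_u * (1.0 - value)
```

At the reference point the loss is 0.5 in either direction, and it decays toward zero as the policy moves desirable examples up (or undesirable ones down) relative to the reference model.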
## Training Parameters
- Learning Rate: 5e-06
- Batch Size: 1
- Training Steps: 2043
- LoRA Rank: 16
- Training Date: 2024-12-29
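The parameters above map naturally onto a `trl` + `peft` setup. A hedged sketch of a matching configuration (assumed libraries and field names; the dataset, base model, and trainer wiring are omitted):

```python
from peft import LoraConfig
from trl import KTOConfig

# LoRA adapter configuration matching the reported rank.
lora_config = LoraConfig(r=16)

# KTO training arguments matching the reported hyperparameters.
training_args = KTOConfig(
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    max_steps=2043,
)
```

This is a configuration fragment only; exact argument names may vary across `trl` versions.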
## Usage
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the LoRA adapter together with its base model.
# Pass a Hugging Face access token if the repository is gated.
model = AutoPeftModelForCausalLM.from_pretrained("kevin009/llama322", token="YOUR_TOKEN")
tokenizer = AutoTokenizer.from_pretrained("kevin009/llama322")
```