---
language:
- en
library_name: transformers
tags:
- chat
pipeline_tag: text-generation
datasets:
- AquaV/c2-sharegpt-advanced-prefills-filtered  
- AquaV/c1-sharegpt-advanced-prefills-filtered  
- AquaV/rainy-sharegpt-advanced-prefills-filtered  
- anthracite-core/Gryphe-Opus-Charcard-Roleplay  
- anthracite-org/kalo-opus-instruct-22k-no-refusal  
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered  
- anthracite-org/nopm_claude_writing_fixed  
- anthracite-org/kalo_opus_misc_240827  
- anthracite-org/kalo_misc_part2  
- NewEden/Claude-Instruct-2.7K  
- NewEden/Claude-Instruct-5K  
---

### EXL2 quant (measurement.json in main branch)
---
### Check revisions for quants
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/nqMkoIsmScaTFHCFirGsc.png" width="500px" />

This is a model designed to replicate the prose quality of the Claude 3 series of models, specifically Sonnet and Opus, made with a prototype Magnum V5 data mix.

This model is fine-tuned on top of [Mistral-Nemo-Instruct (ChatML'ified)](https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML).
## Quants

EXL2: https://huggingface.co/Delta-Vector/Rei-12B-EXL2

GGUF: https://huggingface.co/Delta-Vector/Rei-12B-gguf/
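
For a quick local test of the GGUF quants, something like llama-cpp-python works. This is a minimal sketch, not an endorsed workflow; the quant filename is hypothetical, so use whichever file you download from the repository above.

```py
# Minimal sketch: loading a GGUF quant with llama-cpp-python.
# The filename below is hypothetical; use the quant file you actually download.
from llama_cpp import Llama

llm = Llama(
    model_path="Rei-12B-Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=16384,                       # matches the training sequence length
)
out = llm(
    "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=64,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```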

## Prompting
A typical input would look like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
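
If you build prompts programmatically, the tokenizer's chat template produces the same ChatML format as the string above. A minimal sketch, assuming a repo id of `Delta-Vector/Rei-12B` (substitute whichever copy of the model you are actually loading):

```py
# Minimal sketch: building the ChatML prompt above via the chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Rei-12B")  # assumed repo id

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n"
# so the model continues as the assistant, matching the example above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```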

I would highly recommend using Euryale's system prompt with the model.

<details><summary>See Sao10k's Euryale System Prompt</summary>

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

</details><br>
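
The `{{char}}` and `{{user}}` placeholders are macros that frontends such as SillyTavern fill in automatically; if you are calling the model directly, substitute them yourself. A minimal sketch, with hypothetical character and user names:

```py
# Minimal sketch: filling the {{char}}/{{user}} placeholders by hand.
EURYALE_PROMPT = (
    "Currently, your role is {{char}}, described in detail below. "
    "As {{char}}, continue the narrative exchange with {{user}}.\n"
    "..."  # stands in for the rest of the prompt text above
)

def fill_placeholders(template: str, char: str, user: str) -> str:
    # Plain string replacement, mirroring what frontend macros do.
    return template.replace("{{char}}", char).replace("{{user}}", user)

system_prompt = fill_placeholders(EURYALE_PROMPT, char="Rei", user="Anon")
messages = [{"role": "system", "content": system_prompt}]
print(messages[0]["content"])
```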

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
## model
base_model: NewEden_nemo-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

## qlora COPE
load_in_8bit: false
load_in_4bit: false
strict: false

## data 
datasets:
  - path: AquaV/c2-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: AquaV/c1-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: AquaV/rainy-sharegpt-advanced-prefills-filtered 
    type: sharegpt
  - path: anthracite-core/Gryphe-Opus-Charcard-Roleplay
    type: sharegpt
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
  - path: NewEden/Claude-Instruct-2.7K
    type: sharegpt
  - path: NewEden/Claude-Instruct-5K
    type: sharegpt
shuffle_merged_datasets: true
dataset_prepared_path: dataset_prepared
val_set_size: 0.02
output_dir: 12b-out-rslora-SE

## LIGGER
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

## CTX settings
sequence_len: 16384
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

## Lora 
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
  - embed_tokens
  - lm_head

## WandB
wandb_project: rei
wandb_entity:
wandb_watch:
wandb_name: daring-mango
wandb_log_model:

## evals
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128

## hoe params
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
# optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2.83e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 40
saves_per_epoch: 2
debug:
## for ademamix
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
## for adamw
# deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
   pad_token: <pad>

```
</details><br>
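
Note the `lora_alpha: 16` against `lora_r: 128` in the config above: with `peft_use_rslora: true`, PEFT scales the adapters by `alpha / sqrt(r)` rather than the standard `alpha / r`, which is why such a low alpha still gives a usable effective scale. A quick arithmetic check:

```py
# Quick check of the LoRA scaling implied by the config above
# (lora_r: 128, lora_alpha: 16, peft_use_rslora: true).
import math

r, alpha = 128, 16
print("standard LoRA scale:", alpha / r)             # 0.125
print("rsLoRA scale:", alpha / math.sqrt(r))         # ~1.414
```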


## Training
The training was done for 2 epochs. We used 4x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs, graciously provided by @intervitens, to fine-tune the model.
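
For reference, the effective global batch size follows directly from the config above, assuming data parallelism across all four GPUs under the DeepSpeed ZeRO-3 setup:

```py
# Effective batch size implied by the config and hardware described above.
micro_batch_size = 1
gradient_accumulation_steps = 4
num_gpus = 4

effective_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # 16 packed sequences of up to 16384 tokens per step
```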

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

But why?