---
license: apache-2.0
tags:
- text-generation
- causal-lm
- reasoning
library_name: transformers
---
# JET-1.5B

JET-1.5B improves the reasoning efficiency of LLMs by training the base **DeepSeek-Distill-Qwen-1.5B** model with a reinforcement learning framework. Through this training, the model learns to generate high-quality reasoning steps while minimizing unnecessary computation and token usage.

# Training Code

Our training pipeline is available on GitHub: [Just-Enough-Think](https://github.com/JinyiHan99/Just-Enough-Think/)

The repository contains scripts for:
- RL-based fine-tuning
- Evaluation and benchmarking

## Chat Template

```python
def build_JET_chat_template(question, tokenizer):
    system_prompt = (
        "You are a helpful AI assistant. A conversation takes place between the User "
        "and the Assistant. The User asks a question, and the Assistant solves it.\n"
        "Please help me solve this question. Wrap only the final answer in \\boxed{}."
    )
    return tokenizer.apply_chat_template(
        [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        tokenize=False,
        add_generation_prompt=True,
    )
```
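
The template builder can be sanity-checked without downloading the model by passing in a stub tokenizer. The `StubTokenizer` below is an illustrative stand-in, not part of this repository: it renders messages as plain text, whereas a real run would load the tokenizer with `AutoTokenizer.from_pretrained` and use the model's own chat template.

```python
# Sketch of how build_JET_chat_template composes the message list it hands
# to tokenizer.apply_chat_template. StubTokenizer is a hypothetical stand-in
# for the real JET-1.5B tokenizer, used here so the example runs offline.

def build_JET_chat_template(question, tokenizer):
    system_prompt = (
        "You are a helpful AI assistant. A conversation takes place between the User "
        "and the Assistant. The User asks a question, and the Assistant solves it.\n"
        "Please help me solve this question. Wrap only the final answer in \\boxed{}."
    )
    return tokenizer.apply_chat_template(
        [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        tokenize=False,
        add_generation_prompt=True,
    )

class StubTokenizer:
    """Minimal stand-in exposing the apply_chat_template interface."""

    def apply_chat_template(self, messages, tokenize=False, add_generation_prompt=False):
        # Render each message as a role-tagged block of plain text.
        text = "".join(f"<|{m['role']}|>\n{m['content']}\n" for m in messages)
        if add_generation_prompt:
            text += "<|assistant|>\n"  # open the assistant turn for generation
        return text

prompt = build_JET_chat_template("What is 2 + 2?", StubTokenizer())
print(prompt)
```

In practice, replace `StubTokenizer()` with the tokenizer loaded from this repository; the rendered prompt will then follow the model's actual template rather than the plain-text stand-in above.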