JET-1.5B

JET-1.5B is designed to improve the reasoning efficiency of LLMs. It is obtained by training the DeepSeek-Distill-Qwen-1.5B base model with a reinforcement learning framework; through this training, the model learns to generate high-quality reasoning steps while minimizing unnecessary computation and token usage.
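
The exact reward used in training is not specified on this page. As a rough, hypothetical sketch only, an efficiency-oriented RL objective of this kind typically combines an answer-correctness reward with a penalty on response length. The helper extract_boxed and the weight alpha below are illustrative assumptions, not the released training code.

import re

def extract_boxed(text):
    # Pull the content of the last \boxed{...} span, if any.
    matches = re.findall(r"\\boxed\{([^}]*)\}", text)
    return matches[-1].strip() if matches else None

def efficiency_reward(response, reference_answer, tokenizer, alpha=1e-3):
    # Hypothetical reward: +1 for a correct final answer, minus a penalty
    # proportional to the number of generated tokens. Both the exact-match
    # check and alpha are illustrative assumptions, not JET's actual setup.
    correct = 1.0 if extract_boxed(response) == reference_answer else 0.0
    num_tokens = len(tokenizer.encode(response))
    return correct - alpha * num_tokens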

Training Code

Our training pipeline is available on GitHub: Just-Enough-Think

The repository contains scripts for:

  • RL-based fine-tuning
  • Evaluation and benchmarking

Chat Template

def build_JET_chat_template(question, tokenizer):
    # System prompt instructing the model to solve the question and to
    # wrap only the final answer in \boxed{}.
    system_prompt = (
        "You are a helpful AI assistant. A conversation takes place between the User "
        "and the Assistant. The User asks a question, and the Assistant solves it.\n"
        "Please help me solve this question. Wrap only the final answer in \\boxed{}."
    )
    # Render the system/user messages with the tokenizer's chat template,
    # returning a prompt string ready for generation.
    return tokenizer.apply_chat_template(
        [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        tokenize=False,
        add_generation_prompt=True,
    )
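
A minimal generation sketch using this template is shown below. It assumes the Hugging Face transformers library and the JinyiHan/JET-1.5B checkpoint referenced on this page; the example question and the max_new_tokens setting are illustrative choices, not official recommendations.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JinyiHan/JET-1.5B"  # checkpoint id as listed on this page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build the prompt with the chat template above and generate an answer.
prompt = build_JET_chat_template("What is 15% of 240?", tokenizer)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens (the answer ends in \boxed{...}).
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)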

Model Details

  • Format: Safetensors
  • Model size: 2B params
  • Tensor type: BF16