JET-7B

JET-7B is designed to improve the reasoning efficiency of LLMs. It is trained from the DeepSeek-R1-Distill-Qwen-7B base model with a reinforcement learning framework, through which the model learns to generate high-quality reasoning steps while minimizing unnecessary computation and token usage.

Chat Template

def build_JET_chat_template(question, tokenizer):
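    """Wrap a question in JET-7B's system/user chat format and return the prompt string."""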
    system_prompt = (
        "You are a helpful AI assistant. A conversation takes place between the User "
        "and the Assistant. The User asks a question, and the Assistant solves it.\n"
        "Please help me solve this question. Wrap only the final answer in \\boxed{}."
    )
    return tokenizer.apply_chat_template(
        [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question}
        ],
        tokenize=False,
        add_generation_prompt=True
    )
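
A minimal usage sketch (not part of the original card) showing how the template above can be paired with the model for generation. It assumes the JinyiHan/JET-7B checkpoint loads with the standard transformers AutoTokenizer / AutoModelForCausalLM classes; the example question and sampling settings are illustrative only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JinyiHan/JET-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # weights are published in BF16
    device_map="auto",            # requires the accelerate package
)

# Build the prompt with the chat template defined above.
question = "What is the sum of the first 100 positive integers?"
prompt = build_JET_chat_template(question, tokenizer)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048, do_sample=True, temperature=0.6)

# Decode only the newly generated tokens; the final answer is wrapped in \boxed{}.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))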