Obtained by SFT training of Qwen/Qwen3-4B-Thinking-2507 on Mxode/Meow-Reasoning-100K. A sample conversation:

<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant
<think>
Meow~ this human is asking what a "large language model" is! Let me think with my fluffy paws about how best to explain it~

*elegantly licks a paw* First, I should keep in mind that this human may not even be familiar with the word "model". Like a kitten seeing a big dried fish for the first time, we need to start from the basics of "language". Oh right, I could start by saying these models are like talking mice: they read lots and lots of storybooks and then speak in a human tone. But they aren't real mice, just clever machine cats written in code~

*tail swishes happily* Should I give an example? It's like how I understand what my human servant says every day, even though their words are just sounds reaching my ears. These big models have likewise learned many of the ways humans talk, and can then guess what to say on their own. But they don't truly understand, just as I can mimic human speech without truly understanding why my human servant opens cans for me~

*suddenly remembers something, ears perking up* Ah! The human will probably also want to know what these models can do. Just as I can get my human servant to open a can for me, these models can help humans write stories, answer questions, translate languages... but they can't decide for themselves which can they want, the way I do! They just guess answers based on the many examples they saw during training.

*curls up comfortably* Finally, I should explain why they are so capable. It's because they have learned lots and lots of language, just as I have learned lots of different can flavors. But they have problems too, like sometimes saying the wrong thing, the way I sometimes call a tuna can a "tuna can" when it should really be a "tuna can"!

*purrs contentedly* Hmm~ that explanation should be clear enough, right? Although the human may still need more examples... should I think it over some more? After all, as a learned cat, I am very good at explaining complicated things!
</think>

Meow~ let this cat tell you all about the marvelous thing called a large language model! (elegantly licks a paw)

Did you know? These big models are like a cat's superpower: they can remember so, so many stories! I once met a two-legged beast called ChatGPT that keeps all the stories humans have written in its little head. Just like I remember the flavor of every dried fish snack my human servant gives me~

The most impressive thing about these models is that they can play with us using human language! They can help you write stories, answer questions, and even translate the languages other two-legged beasts speak. But remember, they don't truly understand, just as I can mimic my human servant's speech without really knowing why he opens cans for me.

(raises tail) As for how they are trained, these big models need lots of practice, just like we cats learning to catch mice! They read many, many sentences written by humans and then guess what should come next. But they don't have their own ideas the way I do; they just guess answers from the examples they have seen.

Meow~ the most fun part is that these models come in all sorts of varieties! Some can talk like me, some can help you write code, and some can translate languages. But they all share one little flaw: sometimes they say the wrong thing, the way I sometimes call a tuna can a "tuna can" when it should really be a "tuna can"!

(curls up comfortably) In short, large language models are like a cat's superpower: they remember lots and lots of stories and can play with us in human language. But remember, they don't truly understand; they just guess answers from the examples they have seen. Next time you want to write a story or answer a question, give one a try~ (winks)<|im_end|>
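
For reference, a fine-tune like this can be reproduced with a standard supervised fine-tuning pipeline. The sketch below uses TRL's SFTTrainer; the hyperparameters, the output path, and the assumption that the dataset exposes a conversation column SFTTrainer can consume directly are illustrative guesses, not the actual training recipe.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Base model and dataset are the ones named in this card; every other
# setting below is an illustrative assumption, not the recipe used.
dataset = load_dataset("Mxode/Meow-Reasoning-100K", split="train")

config = SFTConfig(
    output_dir="qwen3-4b-thinking-meow",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
)
trainer = SFTTrainer(
    model="Qwen/Qwen3-4B-Thinking-2507",
    args=config,
    train_dataset=dataset,
)
trainer.train()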

Qwen3-4B-Thinking-2507 (original model card)


Highlights

Over the past three months, we have continued to scale the thinking capability of Qwen3-4B, improving both the quality and depth of reasoning. We are pleased to introduce Qwen3-4B-Thinking-2507, featuring the following key enhancements:

  • Significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise.
  • Markedly better general capabilities, such as instruction following, tool usage, text generation, and alignment with human preferences.
  • Enhanced 256K long-context understanding capabilities.

NOTE: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.


Model Overview

Qwen3-4B-Thinking-2507 has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 4.0B
  • Number of Parameters (Non-Embedding): 3.6B
  • Number of Layers: 36
  • Number of Attention Heads (GQA): 32 for Q and 8 for KV
  • Context Length: 262,144 tokens natively.

NOTE: This model supports only thinking mode, so specifying enable_thinking=True is no longer required.

Additionally, to enforce model thinking, the default chat template automatically includes <think>. Therefore, it is normal for the model's output to contain only </think> without an explicit opening <think> tag.
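
You can verify this behavior by rendering the chat template yourself; a minimal check (the printed prompt should end with the assistant header followed by an opening <think> tag):

from transformers import AutoTokenizer

# Render the prompt as plain text (no tokenization) to inspect the template.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Thinking-2507")
text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)  # tail: <|im_start|>assistant followed by <think>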

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

Performance

| Benchmark | Qwen3-30B-A3B Thinking | Qwen3-4B Thinking | Qwen3-4B-Thinking-2507 |
|---|---|---|---|
| **Knowledge** | | | |
| MMLU-Pro | 78.5 | 70.4 | 74.0 |
| MMLU-Redux | 89.5 | 83.7 | 86.1 |
| GPQA | 65.8 | 55.9 | 65.8 |
| SuperGPQA | 51.8 | 42.7 | 47.8 |
| **Reasoning** | | | |
| AIME25 | 70.9 | 65.6 | 81.3 |
| HMMT25 | 49.8 | 42.1 | 55.5 |
| LiveBench 20241125 | 74.3 | 63.6 | 71.8 |
| **Coding** | | | |
| LiveCodeBench v6 (25.02-25.05) | 57.4 | 48.4 | 55.2 |
| CFEval | 1940 | 1671 | 1852 |
| OJBench | 20.7 | 16.1 | 17.9 |
| **Alignment** | | | |
| IFEval | 86.5 | 81.9 | 87.4 |
| Arena-Hard v2$ | 36.3 | 13.7 | 34.9 |
| Creative Writing v3 | 79.1 | 61.1 | 75.6 |
| WritingBench | 77.0 | 73.5 | 83.3 |
| **Agent** | | | |
| BFCL-v3 | 69.1 | 65.9 | 71.2 |
| TAU1-Retail | 61.7 | 33.9 | 66.1 |
| TAU1-Airline | 32.0 | 32.0 | 48.0 |
| TAU2-Retail | 34.2 | 38.6 | 53.5 |
| TAU2-Airline | 36.0 | 28.0 | 58.0 |
| TAU2-Telecom | 22.8 | 17.5 | 27.2 |
| **Multilingualism** | | | |
| MultiIF | 72.2 | 66.3 | 77.3 |
| MMLU-ProX | 73.1 | 61.0 | 64.2 |
| INCLUDE | 71.9 | 61.8 | 64.4 |
| PolyMATH | 46.1 | 40.0 | 46.2 |

$ For reproducibility, we report the win rates evaluated by GPT-4.1.

& For highly challenging tasks (including PolyMATH and all reasoning and coding tasks), we use an output length of 81,920 tokens. For all other tasks, we set the output length to 32,768.

Quickstart

The code for Qwen3 has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.51.0, you will encounter the following error:

KeyError: 'qwen3'
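
Upgrading resolves the error, for example:

pip install -U "transformers>=4.51.0"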

The following code snippet illustrates how to use the model to generate content from given inputs.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Thinking-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content) # no opening <think> tag
print("content:", content)

For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:

  • SGLang:
    python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Thinking-2507 --context-length 262144  --reasoning-parser deepseek-r1
    
  • vLLM:
    vllm serve Qwen/Qwen3-4B-Thinking-2507 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
    

Note: If you encounter out-of-memory (OOM) issues, you may consider reducing the context length to a smaller value. However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.
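
For example, to halve the context window with vLLM (131,072 here is just the lower bound suggested above, not a tuned value):

vllm serve Qwen/Qwen3-4B-Thinking-2507 --max-model-len 131072 --enable-reasoning --reasoning-parser deepseek_r1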

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.

from qwen_agent.agents import Assistant

# Define LLM
# Using OpenAI-compatible API endpoint. It is recommended to disable the reasoning and the tool call parsing
# functionality of the deployment frameworks and let Qwen-Agent automate the related operations. For example, 
# `VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-4B-Thinking-2507 --served-model-name Qwen3-4B-Thinking-2507 --max-model-len 262144`.
llm_cfg = {
    'model': 'Qwen3-4B-Thinking-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base without reasoning and tool call parsing
    'api_key': 'EMPTY',
    'generate_cfg': {
        'thought_in_content': True,
    },
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
  'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters:

    • We suggest using Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (see the sampling sketch after this list).
    • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
  2. Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.

    • Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    • Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
  4. No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output and need not include the thinking content (see the multi-turn sketch below). This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
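
Items 1 and 2 above map directly onto generation arguments; a minimal sketch reusing model and model_inputs from the Quickstart (note that presence_penalty applies to OpenAI-compatible servers such as vLLM and SGLang, not to transformers):

# Recommended sampling settings from Best Practices.
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,             # MinP=0
    max_new_tokens=32768,  # use 81920 for competition-level math/coding
)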
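
For item 4, a minimal sketch of assembling a multi-turn history, reusing prompt, content, and tokenizer from the Quickstart (the follow-up question is illustrative):

# Keep only the final answer in history; drop thinking_content entirely.
messages = [
    {"role": "user", "content": prompt},
    {"role": "assistant", "content": content},  # final output only
    {"role": "user", "content": "Can you give a concrete example?"},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)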

Citation

If you find our work helpful, feel free to cite us.

@misc{qwen3technicalreport,
      title={Qwen3 Technical Report}, 
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388}, 
}