Disclaimer: This model has only adapted the thinking/response style of GLM 4.6; no knowledge transfer took place. Do not expect results from a 4B model comparable to the original, which has 357B effective parameters.
Use a temperature of 0.6 or lower to avoid repetition.
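As an example, the model can be run locally with llama.cpp's CLI while capping the temperature as recommended. This is a sketch: it assumes llama.cpp is installed and a quantized GGUF file has already been downloaded; the filename below is illustrative, not the exact artifact name.

```shell
# Run the quantized model with llama-cli, keeping temperature at 0.6
# to reduce repetition. The model filename here is an assumption —
# substitute the quant you actually downloaded.
llama-cli \
  -m Qwen3-4B-Thinking-2507-GLM-4.6-Distill-Q4_K_M.gguf \
  --temp 0.6 \
  -p "Explain the difference between a process and a thread."
```

The same `--temp 0.6` cap applies when serving the model through other GGUF-compatible runtimes.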
Model tree for Liontix/Qwen3-4B-Thinking-2507-GLM-4.6-Distill-GGUF
- Base model: Qwen/Qwen3-4B-Thinking-2507
- Finetuned from: unsloth/Qwen3-4B-Thinking-2507