---
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
---
# GLM-4.6-FP8
- Join our Discord community.
- Check out the GLM-4.6 technical blog, the technical report (GLM-4.5), and the Zhipu AI technical documentation.
- Use GLM-4.6 API services on the Z.ai API Platform.
- One-click access to GLM-4.6.
## Model Introduction
Compared with GLM-4.5, **GLM-4.6** brings several key improvements:
* **Longer context window:** The context window has been expanded from 128K to 200K tokens, enabling the model to handle more complex agentic tasks.
* **Superior coding performance:** The model achieves higher scores on code benchmarks and demonstrates better real-world performance in applications such as Claude Code, Cline, Roo Code, and Kilo Code, including improvements in generating visually polished front-end pages.
* **Advanced reasoning:** GLM-4.6 shows a clear improvement in reasoning performance and supports tool use during inference, leading to stronger overall capability.
* **More capable agents:** GLM-4.6 exhibits stronger performance in tool use and search-based agents, and integrates more effectively within agent frameworks.
* **Refined writing:** Better aligns with human preferences in style and readability, and performs more naturally in role-playing scenarios.

We evaluated GLM-4.6 across eight public benchmarks covering agents, reasoning, and coding. Results show clear gains over GLM-4.5, with GLM-4.6 also holding competitive advantages over leading domestic and international models such as **DeepSeek-V3.1-Terminus** and **Claude Sonnet 4**.

## Inference
**GLM-4.5 and GLM-4.6 use the same inference method.**
See our [GitHub repository](https://github.com/zai-org/GLM-4.5) for details.
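
As a quick illustration (not the official recipe; see the linked repository for the supported setup), here is a minimal `transformers` generation sketch. The repo id `zai-org/GLM-4.6-FP8`, the prompt, and the assumption of a multi-GPU host with enough memory for the FP8 checkpoint are illustrative assumptions, not taken from this card:

```python
# Minimal sketch: chat-style generation with Hugging Face transformers.
# Assumptions (not from the model card): repo id "zai-org/GLM-4.6-FP8",
# a transformers version that supports the GLM-4.6 architecture, and
# enough GPU memory to hold the FP8 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.6-FP8"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # shard the model across available GPUs
)

messages = [{"role": "user", "content": "Summarize the key improvements in GLM-4.6."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```

For production serving and any FP8-specific launch options, follow the instructions in the GitHub repository linked above.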