---
license: apache-2.0
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- reinforcement-learning
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---

# SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression

[📃 Paper](https://arxiv.org/abs/2509.25176) • 📝 Wandb

---

## 🔍 Overview

**SIRI (Scaling Iterative Reinforcement Learning with Interleaved Compression)** is a reinforcement-learning-based framework designed to improve the efficiency and accuracy of **Large Reasoning Models (LRMs)**.

Traditional RL training often causes **overthinking** and long, redundant reasoning traces. Prior methods that compress outputs (length penalties, pruning, or skipping thought tokens) improve efficiency but hurt accuracy. SIRI resolves this trade-off by **iteratively alternating between compression and expansion of the reasoning budget**, controlled by a cosine length scheduler. This approach dynamically balances concise reasoning with long-horizon exploration.

*Figure: accuracy vs. token-usage Pareto front.*
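The interleaved schedule can be pictured as a periodic budget on the maximum rollout length. The snippet below is a minimal, illustrative sketch only, not the paper's implementation: the function name `length_budget`, the cycle period, and the minimum/maximum token budgets are all placeholder assumptions chosen for clarity.

```python
import math

def length_budget(step: int, period: int = 200,
                  min_len: int = 2048, max_len: int = 8192) -> int:
    """Illustrative cosine schedule for the maximum rollout length.

    The budget starts at max_len (expansion), is squeezed toward min_len
    (compression), and is restored again, repeating every `period`
    training steps. All constants are placeholders, not values from SIRI.
    """
    phase = (step % period) / period                        # position within one cycle, in [0, 1)
    cos_wave = 0.5 * (1 + math.cos(2 * math.pi * phase))    # goes 1 -> 0 -> 1 over the cycle
    return int(min_len + (max_len - min_len) * cos_wave)

# Example: the budget contracts, then expands again over one cycle.
for s in (0, 50, 100, 150, 200):
    print(s, length_budget(s))   # 8192, 5120, 2048, 5120, 8192
```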

---

## 🚀 Key Features

- **Interleaved Compression–Expansion**:
  - *Compression phase*: forces concise, high-density reasoning by limiting rollout length.
  - *Expansion phase*: restores longer rollouts to encourage exploration and planning.
- **Token Efficiency without Accuracy Loss**: unlike previous methods, SIRI improves accuracy *while reducing average token usage*.
- **Iterative RL Training**: built on GRPO with modifications from DAPO (clip-high/low decoupling, KL removal); a minimal sketch of this objective follows the citation below.
- **Generalization Across Model Sizes**: validated on both **1.5B** and **7B** models.

---

## 📊 Benchmarks

![perf](https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/0S2d9VZTiaoGI6_N9Vrh2.png)

---

## 📝 Citation

```bibtex
@misc{wen2025siriscalingiterativereinforcement,
      title={SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression},
      author={Haoming Wen and Yushi Bai and Juanzi Li and Jie Tang},
      year={2025},
      eprint={2509.25176},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.25176},
}
```
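For readers curious about the "clip-high/low decoupling, KL removal" mentioned under Key Features, here is a minimal sketch of a DAPO-style clipped surrogate with GRPO-style group-normalized advantages and no KL penalty. It illustrates the general technique only, not SIRI's training code; the function name, tensor shapes, and epsilon values are assumptions.

```python
import torch

def decoupled_clip_loss(logp_new: torch.Tensor,
                        logp_old: torch.Tensor,
                        rewards: torch.Tensor,
                        eps_low: float = 0.2,
                        eps_high: float = 0.28) -> torch.Tensor:
    """DAPO-style clipped policy-gradient loss with group-relative advantages.

    logp_new / logp_old: per-rollout log-probabilities under the current and
    behavior policies, shape (group_size,). rewards: scalar reward per rollout
    in the same group. The asymmetric clip range (eps_high > eps_low) and the
    absence of a KL-to-reference term follow the DAPO-style modifications;
    the specific epsilon values here are illustrative placeholders.
    """
    # GRPO-style advantage: normalize rewards within the rollout group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    ratio = torch.exp(logp_new - logp_old)                  # importance ratio
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    surrogate = torch.minimum(ratio * adv, clipped * adv)   # pessimistic (clipped) bound
    return -surrogate.mean()                                # minimize the negative objective
```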