Harder Is Better: Boosting Mathematical Reasoning via Difficulty-Aware GRPO and Multi-Aspect Question Reformulation
Abstract
MathForge enhances mathematical reasoning in large models through a dual framework combining difficulty-aware policy optimization and multi-aspect question reformulation to address limitations in existing reinforcement learning methods.
Reinforcement Learning with Verifiable Rewards (RLVR) offers a robust mechanism for enhancing mathematical reasoning in large models. However, we identify a systematic lack of emphasis on more challenging questions in existing methods from both algorithmic and data perspectives, despite their importance for refining underdeveloped capabilities. Algorithmically, the widely used Group Relative Policy Optimization (GRPO) suffers from an implicit imbalance in which the magnitude of policy updates is smaller for harder questions. Data-wise, augmentation approaches primarily rephrase questions to enhance diversity without systematically increasing intrinsic difficulty. To address these issues, we propose the dual-perspective MathForge framework, which targets harder questions from both angles and comprises a Difficulty-Aware Group Policy Optimization (DGPO) algorithm and a Multi-Aspect Question Reformulation (MQR) strategy. Specifically, DGPO first rectifies the implicit imbalance in GRPO via difficulty-balanced group advantage estimation, and further prioritizes harder questions through difficulty-aware question-level weighting. Meanwhile, MQR reformulates questions across multiple aspects to increase difficulty while preserving the original gold answer. Overall, MathForge forms a synergistic loop: MQR expands the data frontier, and DGPO effectively learns from the augmented data. Extensive experiments show that MathForge significantly outperforms existing methods on various mathematical reasoning tasks. The code and augmented data are all available at https://github.com/AMAP-ML/MathForge.
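To make the claimed imbalance concrete, here is a minimal numerical sketch (not the authors' released code): it computes standard GRPO group-normalized advantages on binary verifiable rewards and shows that the summed update magnitude shrinks for hard questions (low per-question accuracy p). The `difficulty_weight` function is a purely hypothetical question-level weight for illustration; the actual DGPO estimator and weighting are defined in the paper, not here.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standard GRPO group-relative advantage: z-score of each rollout's reward
    within its group."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def summed_update_magnitude(p: float, group_size: int) -> float:
    """Sum of |advantage| over one group with binary (0/1) rewards and accuracy p.
    Analytically this is 2 * G * sqrt(p * (1 - p)), peaking at p = 0.5 and
    vanishing for all-correct (p = 1) or all-wrong (p = 0) groups."""
    k = int(round(p * group_size))
    if k in (0, group_size):
        return 0.0  # zero reward variance -> zero advantage for every rollout
    rewards = np.array([1.0] * k + [0.0] * (group_size - k))
    return float(np.abs(grpo_advantages(rewards)).sum())

def difficulty_weight(p: float, alpha: float = 1.0) -> float:
    """Hypothetical question-level weight that up-weights low-accuracy (hard)
    questions. Illustrative only; not the DGPO weighting from the paper."""
    return (1.0 - p) ** alpha

if __name__ == "__main__":
    G = 8
    for p in (0.125, 0.25, 0.5, 0.75, 0.875):
        print(f"p={p:.3f}  sum|A|={summed_update_magnitude(p, G):.3f}  "
              f"2G*sqrt(p(1-p))={2 * G * np.sqrt(p * (1 - p)):.3f}  "
              f"hypothetical weight={difficulty_weight(p):.2f}")
```

Running this prints the largest summed magnitude at p = 0.5 and noticeably smaller values at p = 0.125 or p = 0.875, which is the imbalance that DGPO is designed to rectify.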
Community
The theoretical proof in the appendix does not support the paper's main conclusion that GRPO focuses on problems of medium difficulty.
Hi! Thank you for your careful reading and insightful comment.
In Appendices B.2 and B.3, we show that the total policy update magnitude in GRPO can be well approximated by $2G\sqrt{p(1-p)}$, which reaches its maximum when $p=0.5$ (i.e., problems of medium difficulty).
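For concreteness, here is the short calculation behind that approximation (reconstructed from the stated formula rather than quoted from the appendix; it assumes binary 0/1 rewards, group size $G$, and $k = pG$ correct rollouts). The group mean reward is $\bar{r} = p$ and the standard deviation is $\sigma = \sqrt{p(1-p)}$, so each correct rollout has normalized advantage $(1-p)/\sigma$ and each incorrect one $-p/\sigma$. Hence

$$
\sum_{i=1}^{G} |A_i| \;=\; k\,\frac{1-p}{\sqrt{p(1-p)}} \;+\; (G-k)\,\frac{p}{\sqrt{p(1-p)}} \;=\; 2G\sqrt{p(1-p)},
$$

which is maximized at $p = 0.5$ and vanishes as $p \to 0$ or $p \to 1$.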
We would be happy to discuss this further. Please feel free to contact us via email at yanqidai@ruc.edu.cn.
This can only estimate the upper bound of gradient updates, but it cannot estimate the magnitude of each update.
You can refer to the last two paragraphs of analysis in Appendix B.2; although it is not a strictly accurate measure of the update magnitude, we believe it can serve as a suitable approximation.
I have studied this problem before. If you carry the derivation further for problems of different difficulty levels, you will find that it is theoretically impossible to distinguish the impact of different difficulties on training.
Thank you for your interest and comment.
We agree that many theoretical aspects of RL training for large models remain open. If you have further insights or results, we would be glad to see them shared for broader discussion.