GroundedPRM: Tree-Guided and Fidelity-Aware Process Reward Modeling for Step-Level Reasoning
Abstract
GroundedPRM uses Monte Carlo Tree Search and external validation to improve multi-step reasoning in LLMs with fewer, higher-quality annotations.
Process Reward Models (PRMs) aim to improve multi-step reasoning in Large Language Models (LLMs) by supervising intermediate steps and identifying errors. However, building effective PRMs remains challenging due to the lack of scalable, high-quality annotations. Existing approaches rely on costly human labeling, LLM-based self-evaluation that is prone to hallucination, or Monte Carlo (MC) estimation, which infers step quality solely from rollout outcomes and often introduces noisy, misaligned supervision due to credit misattribution. These issues result in three core limitations: noisy rewards, low factual fidelity, and misalignment with step-level reasoning objectives. To address these challenges, we introduce GroundedPRM, a tree-guided and fidelity-aware framework for automatic process supervision. To reduce reward noise and enable fine-grained credit assignment, we construct structured reasoning paths via Monte Carlo Tree Search (MCTS). To eliminate hallucinated supervision, we validate each intermediate step using an external tool, providing execution-grounded correctness signals. To combine both step-level validation and global outcome assessment, we design a hybrid reward aggregation mechanism that fuses tool-based verification with MCTS-derived feedback. Finally, we format the reward signal into a rationale-enhanced, generative structure to promote interpretability and compatibility with instruction-tuned LLMs. GroundedPRM is trained on only 40K automatically labeled samples, amounting to just 10% of the data used by the best-performing PRM trained with auto-labeled supervision. Nevertheless, it achieves up to a 26% relative improvement in average performance on ProcessBench. When used for reward-guided greedy search, GroundedPRM outperforms even PRMs trained with human-labeled supervision, offering a scalable and verifiable path toward high-quality process-level reasoning.
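As a rough illustration of the hybrid reward aggregation described in the abstract, the minimal Python sketch below fuses an execution-grounded tool verdict with an MCTS value estimate into a step-level reward. The field names, the convex-combination fusion rule, and the labeling threshold are illustrative assumptions only; the abstract does not specify the paper's actual formulation.

```python
# Minimal sketch of the hybrid (tool verification + MCTS feedback) reward idea.
# NOTE: the fusion rule, weights, and names below are assumptions for illustration,
# not the formulation used by GroundedPRM.
from dataclasses import dataclass

@dataclass
class StepRecord:
    text: str           # the intermediate reasoning step
    tool_correct: bool  # execution-grounded verdict from an external tool (assumed binary)
    mcts_value: float   # value estimate for this node from MCTS rollouts, assumed in [0, 1]

def hybrid_step_reward(step: StepRecord, alpha: float = 0.5) -> float:
    """Fuse tool-based verification with MCTS-derived feedback.

    Hypothetical convex combination: alpha weights the execution-grounded
    correctness signal, (1 - alpha) weights the rollout-based value estimate.
    """
    tool_signal = 1.0 if step.tool_correct else 0.0
    return alpha * tool_signal + (1.0 - alpha) * step.mcts_value

def label_reasoning_path(steps: list[StepRecord], threshold: float = 0.5) -> list[int]:
    """Turn fused rewards into step-level labels (1 = positive supervision)."""
    return [1 if hybrid_step_reward(s) >= threshold else 0 for s in steps]
```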
Community
GroundedPRM is a tree-guided, fidelity-aware Process Reward Model that fuses MCTS reasoning paths with tool-based verification to produce precise, interpretable, and scalable process supervision. Trained on only 40K auto-labeled samples, it achieves a relative improvement of up to 26% in average performance on ProcessBench, and under reward-guided greedy search it even surpasses PRMs trained with human-labeled supervision.
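As context for the reward-guided greedy search result, the sketch below shows the generic procedure: at each step, candidate continuations are sampled from a proposer model and the one the PRM scores highest is kept. The `generate_candidates` and `score_step` callables are hypothetical placeholders, not GroundedPRM's actual interface.

```python
from typing import Callable, List

def reward_guided_greedy_search(
    question: str,
    generate_candidates: Callable[[str, List[str]], List[str]],  # proposer LLM (placeholder)
    score_step: Callable[[str, List[str], str], float],          # PRM step scorer (placeholder)
    max_steps: int = 16,
    stop_token: str = "<answer>",
) -> List[str]:
    """Generic reward-guided greedy decoding with a process reward model.

    At every step, sample candidate next steps, keep the candidate the PRM
    scores highest, and stop once a final-answer marker appears.
    """
    path: List[str] = []
    for _ in range(max_steps):
        candidates = generate_candidates(question, path)
        if not candidates:
            break
        best = max(candidates, key=lambda c: score_step(question, path, c))
        path.append(best)
        if stop_token in best:
            break
    return path
```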
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- A Survey of Process Reward Models: From Outcome Signals to Process Supervisions for Large Language Models (2025)
- Fin-PRM: A Domain-Specialized Process Reward Model for Financial Reasoning in Large Language Models (2025)
- ToolPRM: Fine-Grained Inference Scaling of Structured Outputs for Function Calling (2025)
- Enhancing Large Language Model Reasoning with Reward Models: An Analytical Survey (2025)
- Training Vision-Language Process Reward Models for Test-Time Scaling in Multimodal Reasoning: Key Insights and Lessons Learned (2025)
- Unveiling Chain of Step Reasoning for Vision-Language Models with Fine-grained Rewards (2025)
- Hybrid Reward Normalization for Process-supervised Non-verifiable Agentic Tasks (2025)