Abstract
DocReward, a document reward model, evaluates and enhances the structural and stylistic quality of generated documents, outperforming GPT-4o and GPT-5 in both accuracy and human-preferred document generation.
Recent advances in agentic workflows have enabled the automation of tasks such as professional document generation. However, these workflows focus primarily on textual quality and neglect visual structure and style, which are crucial for readability and engagement. This gap arises mainly from the absence of suitable reward models to guide agentic workflows toward producing documents with stronger structural and stylistic quality. To address this, we propose DocReward, a document reward model that evaluates documents based on their structure and style. We construct DocPair, a multi-domain dataset of 117K document pairs covering 32 domains and 267 document types; each pair contains a high- and a low-professionalism document with identical content but different structure and style. This enables the model to evaluate professionalism comprehensively and in a textual-quality-agnostic way. DocReward is trained with the Bradley-Terry loss to score documents, penalizing predictions that contradict the annotated ranking. To assess the performance of reward models, we create a test dataset of document bundles ranked by well-educated human evaluators. Notably, DocReward outperforms GPT-4o and GPT-5 in accuracy by 30.6 and 19.4 percentage points, respectively, demonstrating its superiority over the baselines. In an extrinsic evaluation of document generation, DocReward achieves a win rate of 60.8%, significantly higher than GPT-5's 37.7%, demonstrating its utility in guiding generation agents toward producing human-preferred documents.
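For intuition, the Bradley-Terry objective mentioned above can be viewed as a pairwise ranking loss over scalar reward scores. The snippet below is a minimal sketch, not the authors' implementation; the `bradley_terry_loss` helper and the example scores are hypothetical.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(score_high: torch.Tensor, score_low: torch.Tensor) -> torch.Tensor:
    """Pairwise Bradley-Terry loss: penalize pairs where the score of the
    higher-professionalism document does not exceed that of the lower one."""
    # -log sigmoid(s_high - s_low), averaged over the batch of document pairs
    return -F.logsigmoid(score_high - score_low).mean()

# Example: a batch of 4 document pairs scored by a (hypothetical) reward model
scores_high = torch.tensor([2.1, 0.3, 1.5, -0.2])  # scores for high-professionalism documents
scores_low = torch.tensor([1.0, 0.8, -0.4, -1.1])  # scores for low-professionalism documents
loss = bradley_terry_loss(scores_high, scores_low)
print(loss.item())
```

In practice, both scores would come from the same reward model applied to the high- and low-professionalism versions of the same content, so the loss only reflects structural and stylistic differences.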
Community
Introducing DocReward, a document reward model that evaluates documents based on their structure and style.
cool paper
The following papers were recommended by the Semantic Scholar API:
- Multi-Objective Task-Aware Predictor for Image-Text Alignment (2025)
- PosterForest: Hierarchical Multi-Agent Collaboration for Scientific Poster Generation (2025)
- Beyond Quality: Unlocking Diversity in Ad Headline Generation with Large Language Models (2025)
- POINTS-Reader: Distillation-Free Adaptation of Vision-Language Models for Document Conversion (2025)
- WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning (2025)
- Leveraging Generative Models for Real-Time Query-Driven Text Summarization in Large-Scale Web Search (2025)
- OmniStyle2: Scalable and High Quality Artistic Style Transfer Data Generation via Destylization (2025)