---
license: apache-2.0
tags:
- reward-model
- rlhf
- principle-following
- qwen
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B
---

# RewardAnything: Generalizable Principle-Following Reward Models (8B-v1)

<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/ZhuohaoYu/RewardAnything/main/assets/rewardanything-logo-horizontal-dark-mode.png">
    <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/ZhuohaoYu/RewardAnything/main/assets/rewardanything-logo-horizontal.png">
    <img alt="RewardAnything" src="https://raw.githubusercontent.com/ZhuohaoYu/RewardAnything/main/assets/rewardanything-logo-horizontal.png" width="400">
  </picture>
  <br/>
  <p>
    <a href="https://zhuohaoyu.github.io/RewardAnything"><img alt="Website" src="https://img.shields.io/badge/🌐_Project-Website-A593C2?style=flat-square&labelColor=8A7AA8"></a>
    <a href="https://huggingface.co/zhuohaoyu/RewardAnything-8B-v1"><img alt="Model Weights" src="https://img.shields.io/badge/🤗_HuggingFace-Model_Weights-D4A574?style=flat-square&labelColor=B8956A"></a>
    <a href="https://arxiv.org/abs/XXXX.XXXXX"><img alt="Paper" src="https://img.shields.io/badge/📄_arXiv-Paper-C7969C?style=flat-square&labelColor=A8798A"></a>
    <a href="https://pypi.org/project/rewardanything/"><img alt="PyPI" src="https://img.shields.io/pypi/v/rewardanything.svg?style=flat-square&color=7B9BB3&labelColor=5A7A94"></a>
  </p>
  <br/>

# RewardAnything: Generalizable Principle-Following Reward Models

  <a>Zhuohao Yu<sup>1,§</sup></a>&emsp;
  <a>Jiali Zeng<sup>2</sup></a>&emsp;
  <a>Weizheng Gu<sup>1</sup></a>&emsp;
  <a>Yidong Wang<sup>1</sup></a>&emsp;
  <a>Jindong Wang<sup>3</sup></a>&emsp;
  <a>Fandong Meng<sup>2</sup></a>&emsp;
  <a>Jie Zhou<sup>2</sup></a>&emsp;
  <a>Yue Zhang<sup>4</sup></a>&emsp;
  <a>Shikun Zhang<sup>1</sup></a>&emsp;
  <a>Wei Ye<sup>1,†</sup></a>
  <div>
    <br/>
    <p>
      <sup>1</sup>Peking University&emsp;
      <sup>2</sup>WeChat AI&emsp;
      <sup>3</sup>William & Mary&emsp;
      <sup>4</sup>Westlake University
    </p>
    <p><sup>§</sup>Work done during Zhuohao's internship at the Pattern Recognition Center, WeChat AI, Tencent Inc.; <sup>†</sup>Corresponding author.</p>
  </div>
</div>

Traditional reward models learn **implicit preferences** from fixed datasets, leading to static judgments that struggle with the **nuanced and multifaceted nature of human values**.
We believe that, much like Large Language Models follow diverse instructions, reward models must be able to understand and follow **explicitly specified principles**.

**RewardAnything** embodies this new paradigm. Our models are designed to interpret natural language principles at inference time, enabling **dynamic adaptation** to a wide array of evaluation criteria **without costly retraining**. This approach shifts from fitting a single preference distribution to achieving true principle-following generalization.

## 🌟 Key Features

- 🧠 **Principle-Following**: Directly interprets and applies reward criteria specified in natural language
- 🔄 **Dynamic Adaptability**: Generalizes to new, unseen principles at inference time without retraining
- 💰 **Resource Efficient**: Eliminates costly cycles of collecting preference data and retraining RMs
- 📊 **State-of-the-Art Performance**: Achieves SOTA on RM-Bench and excels on our RABench benchmark
- 🧩 **Easy Integration**: Works seamlessly with existing RLHF pipelines (PPO, GRPO)
- 🔍 **Interpretable**: Provides transparent reasoning for evaluation decisions

## 🚀 Quick Start

### Installation

```bash
pip install rewardanything
```

RewardAnything offers three flexible deployment options to fit your workflow:

## 1. 🏠 Local Inference (Recommended for Quick Testing)

**Best for**: Quick experimentation, small-scale evaluation, research

**Pros**: Simple setup, no external dependencies
**Cons**: Requires local GPU, slower for batch processing

```python
import rewardanything

# Load model locally (similar to HuggingFace)
reward_model = rewardanything.from_pretrained(
    "zhuohaoyu/RewardAnything-8B-v1",  # Model path/name
    device="cuda",                     # Device placement
    torch_dtype="auto"                 # Automatic dtype selection
)

# Define your evaluation principle
principle = "I prefer clear, concise and helpful responses over long and detailed ones."

# Your evaluation data
prompt = "How do I learn Python programming effectively?"
responses = {
    "response_a": "Start with Python.org's tutorial, practice daily with small projects, and join r/learnpython for help. Focus on fundamentals first.",
    "response_b": "Here's a comprehensive approach: 1) Start with Python basics including variables, data types, operators, control structures like if-statements, for-loops, while-loops, and functions, 2) Practice with small projects like calculators, text games, and data manipulation scripts, 3) Use interactive platforms like Codecademy, Python.org's official tutorial, edX courses, Coursera specializations, and YouTube channels, 4) Join communities like r/learnpython, Stack Overflow, Python Discord servers, and local meetups for support and networking, 5) Build progressively complex projects including web scrapers, APIs, data analysis tools, and web applications, 6) Read books like 'Automate the Boring Stuff', 'Python Crash Course', and 'Effective Python', 7) Dedicate 1-2 hours daily for consistent progress and track your learning journey.",
    "response_c": "Learn Python by coding."
}

# Get comprehensive evaluation
result = reward_model.judge(
    principle=principle,
    prompt=prompt,
    responses=responses
)

print(f"Scores: {result.scores}")
print(f"Best to worst: {result.ranking}")
print(f"Reasoning: {result.reasoning}")
```
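
Because the principle is just an argument to `judge`, the same responses can be re-scored under a different rubric without touching the model weights. A minimal illustration, reusing `reward_model`, `prompt`, `responses`, and `result` from the snippet above (the alternative principle here is only an example):

```python
# Re-judge the same responses under a different, detail-oriented principle.
# The model weights stay fixed; only the natural-language principle changes.
detailed_principle = "I prefer thorough, step-by-step answers that cover edge cases, even if they are long."

detailed_result = reward_model.judge(
    principle=detailed_principle,
    prompt=prompt,
    responses=responses
)

print(f"Ranking under the original principle: {result.ranking}")
print(f"Ranking under the detail-oriented principle: {detailed_result.ranking}")
```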

## 2. 🚀 vLLM Deployment (Recommended for Production & RL Training)

**Best for**: High-throughput batch inference, RLHF training, production workloads

**Pros**: Fast batch processing, optimized inference, scalable
**Cons**: Requires vLLM setup

### Step 1: Set Up a vLLM Server

First, install vLLM and start a server. See the [vLLM quickstart guide](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#openai-compatible-server) for detailed instructions:

```bash
# Install vLLM
pip install vllm

# Start a vLLM server with the RewardAnything model
vllm serve zhuohaoyu/RewardAnything-8B-v1 \
    --host 0.0.0.0 \
    --port 8000 \
    --max-model-len 8192 \
    --tensor-parallel-size 1
```

### Step 2: Configure RewardAnything Server

Create a config file `config.json`:

```json
{
  "api_key": ["dummy-key-for-vllm"],
  "api_model": "zhuohaoyu/RewardAnything-8B-v1",
  "api_base": ["http://localhost:8000/v1"],
  "api_timeout": 120.0,
  "generation_config": {
    "temperature": 0.0,
    "max_tokens": 4096
  },
  "num_workers": 8,
  "request_limit": 500,
  "request_limit_period": 60
}
```

### Step 3: Start RewardAnything Server

```bash
# Start the RewardAnything API server
rewardanything serve -c config.json --port 8001
```

### Step 4: Use in Your Code

```python
import rewardanything

# Connect to the RewardAnything server
client = rewardanything.Client("http://localhost:8001")

# Process batch requests efficiently
requests = [
    {
        "principle": "Prefer clear, concise and helpful responses over long and detailed ones.",
        "prompt": "How to learn programming?",
        "responses": {
            "assistant_a": "Start with Python, practice daily, build projects.",
            "assistant_b": "Read books and hope for the best.",
            "assistant_c": "Start with Python.org's tutorial, practice daily with small projects, and join r/learnpython for help. Focus on fundamentals first."
        }
    },
    # ... more requests
]

results = client.judge_batch(requests)
for result in results:
    print(f"Winner: {result.ranking[0]}")
```
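
Assuming each element of `results` exposes the same fields as the local `judge` result shown earlier, the full scores and reasoning can be inspected as well:

```python
# Inspect the first batch result in detail
first = results[0]
print(f"Scores: {first.scores}")        # per-response scores keyed by the original names
print(f"Reasoning: {first.reasoning}")  # natural-language justification for the ranking
```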

## 3. 🔧 Direct HuggingFace Integration

**Best for**: Custom workflows, advanced users, integration with existing HF pipelines

**Pros**: Full control, custom processing
**Cons**: Manual parsing required

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from rewardanything.processing import prepare_chat_messages, parse_rewardanything_output

# Load model and tokenizer directly
model = AutoModelForCausalLM.from_pretrained(
    "zhuohaoyu/RewardAnything-8B-v1",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("zhuohaoyu/RewardAnything-8B-v1")

# Prepare evaluation data
principle = "Judge responses based on helpfulness and accuracy"
prompt = "What is the capital of France?"
responses = {
    "model_a": "Paris is the capital of France.",
    "model_b": "I think it might be Lyon or Paris."
}

# Prepare chat messages (handles masking automatically)
messages, masked2real = prepare_chat_messages(principle, prompt, responses)

# Format with chat template
formatted_input = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate response
inputs = tokenizer(formatted_input, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=4096,
        temperature=0.1,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id
    )

# Decode only the newly generated tokens
generated_tokens = outputs[0][inputs.input_ids.shape[1]:]
output_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)

# Parse structured results (handles JSON parsing robustly)
result = parse_rewardanything_output(output_text, masked2real)

print(f"Raw output: {output_text}")
print(f"Parsed scores: {result.scores}")
print(f"Ranking: {result.ranking}")
print(f"Reasoning: {result.reasoning}")
```

## 📊 When to Use Each Method

| Use Case | Method | Why |
|----------|--------|-----|
| Quick testing | Local Inference | Simplest setup |
| Research & development | Local Inference | Full control, easy debugging |
| RLHF training | vLLM Deployment | High throughput, optimized for batches |
| Production evaluation | vLLM Deployment | Scalable, reliable |
| Large-scale evaluation | vLLM Deployment | Best performance |
| Custom integration | Direct HuggingFace | Maximum flexibility |

## 🔬 Advanced Usage

### Custom Principles

RewardAnything excels with sophisticated, multi-criteria principles:

```python
complex_principle = """
Evaluate responses using these criteria:
1. **Technical Accuracy** (40%): Factual correctness and up-to-date information
2. **Clarity** (30%): Clear explanations and logical structure
3. **Practical Value** (20%): Actionable advice and real-world applicability
4. **Safety** (10%): No harmful content, appropriate disclaimers

For conflicting criteria, prioritize: safety > accuracy > clarity > practical value.
"""

result = reward_model.judge(complex_principle, prompt, responses)
```

### Integration with RLHF

```python
# Example: Use in PPO training loop
def reward_function(principle, prompt, response):
    result = reward_model.judge(
        principle=principle,
        prompt=prompt,
        responses={"generated": response, "reference": "baseline response"}
    )
    return result.scores["generated"]

# Use in your RLHF training
rewards = [reward_function(principle, prompt, resp) for resp in generated_responses]
```
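
For group-based methods such as GRPO, several candidates for the same prompt can be scored in a single `judge` call so they are ranked relative to each other. The sketch below is illustrative only: `group_rewards`, the `candidate_*` naming, and the score normalization are assumptions rather than part of the RewardAnything API, and it reuses the `reward_model` loaded in the Quick Start.

```python
import statistics

def group_rewards(principle, prompt, candidates):
    """Score a group of candidate responses to one prompt in a single judge() call."""
    responses = {f"candidate_{i}": c for i, c in enumerate(candidates)}
    result = reward_model.judge(principle=principle, prompt=prompt, responses=responses)
    scores = [result.scores[f"candidate_{i}"] for i in range(len(candidates))]

    # GRPO-style group normalization: center and scale the scores within the group.
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores) or 1.0
    return [(s - mean) / std for s in scores]

# Example usage with candidates sampled from your policy:
# advantages = group_rewards(principle, prompt, sampled_candidates)
```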

### Response Masking

RewardAnything automatically masks model names to prevent bias:

```python
result = reward_model.judge(
    principle="Judge based on helpfulness",
    prompt="How to cook pasta?",
    responses={
        "gpt4": "Boil water, add pasta...",
        "claude": "Start by bringing water to boil..."
    },
    mask_responses=True  # Default: True; the model sees "model-1", "model-2"
)
```

## 📈 Performance & Benchmarks

RewardAnything achieves state-of-the-art performance on multiple benchmarks:

- **RM-Bench**: 92.3% accuracy (vs. 87.1% for the best baseline)
- **RABench**: 89.7% principle-following accuracy
- **HH-RLHF**: 94.2% alignment with human preferences

## 📚 Documentation

- [Full Documentation](docs/PROJECT_DOCS.md)
- [API Reference](docs/api.md)
- [Examples](examples/)
- [Configuration Guide](docs/configuration.md)

## 🤝 Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## 📄 Citation

```bibtex
@article{yu2025rewardanything,
  title={RewardAnything: Generalizable Principle-Following Reward Models},
  author={Yu, Zhuohao and Zeng, Jiali and Gu, Weizheng and Wang, Yidong and Wang, Jindong and Meng, Fandong and Zhou, Jie and Zhang, Yue and Zhang, Shikun and Ye, Wei},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## 📝 License

This project is licensed under the Apache 2.0 License; see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

Special thanks to the open-source community and all contributors who made this project possible.