Foundational Automatic Evaluators: Scaling Multi-Task Generative Evaluator Training for Reasoning-Centric Domains
Paper: arXiv:2510.17793
Authors: Austin Xu, Xuan-Phi Nguyen, Yilun Zhou, Chien-Sheng Wu, Caiming Xiong, Shafiq Joty
FARE-20B is a multi-task evaluator model finetuned from gpt-oss-20B. It is trained with rejection-sampling SFT on a large-scale multi-task, multi-domain data mixture to perform the following evaluation tasks: pairwise comparison, step-level evaluation, reference-based verification, reference-free verification, and single-rating assessment.
Usage
The FARE family of evaluators has been trained with specific system and user prompt templates.
We provide examples below for two evaluation tasks: pairwise comparison and step-level error identification. Prompt templates for the other tasks are provided in our paper (Appendix E).
Pairwise comparisons
PROMPT_PAIRWISE_SYSTEM = """
Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user prompt displayed below. You will be given assistant A's answer and assistant B's answer. Your job is to determine which assistant's answer is better.
If assistant A is better, output [A]. If assistant B is better, output [B].
Here are some rules for evaluation
(1) When evaluating the assistants' answers, identify any mistakes or inaccurate information. Focus on the content each response and select the response that is logically sound and error free.
(2) If both responses contain inaccurate information, select the response that arrives at the correct response
(3) Avoid any biases, such as order of responses, length, or stylistic elements like formatting
Before outputting your final judgment, provide an explanation of your judgment. Your explanation should discuss why your chosen response is better based on the evaluation criteria. The explanation should concretely discuss strengths and weaknesses of both answers.
After outputting your explanation, provide your final judgment. Use the following format:
Explanation: Your explanation here
Verdict: Your final verdict
""".strip()
PROMPT_PAIRWISE="""
[User Question]
{instruction}
[The Start of Assistant A's Answer]
{response_a}
[The End of Assistant A's Answer]
[The Start of Assistant B's Answer]
{response_b}
[The End of Assistant B's Answer]
""".strip()
Step-level evaluation
PROMPT_PROCESS_SYSTEM_ERROR_ID = """
Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user prompt displayed below. You will be given the assistant's solution to a math problem, which is split into steps, starting with a <step [step number]> tag, where [step number] is indexed from 0. Your job is to identify which step an error occurs, if an error is present.
When evaluating the solution, consider each step separately. Evaluate the content of each step for correctness. If you encounter a mistake at <step [step number]>, output [step number] as your Verdict. If the full response is error free, then select step number -1. Avoid any biases, such as length of step, or stylistic elements like formatting.
Here are some rules for evaluation.
(1) The assistant's answer does not need to be complete or arrive at a final solution. You may receive a partially complete response. Your job is to assess the quality of each step.
(2) When evaluating the assistant's answer, identify any mistakes or inaccurate information. Focus on the content each step and determine if the step is logically valid.
(3) For each step, you should provide an explanation of your assessment. If you find an error, describe the nature and cause of the error.
(4) Avoid any biases, such as answer length, or stylistic elements like formatting.
Before providing an your final verdict, think through the judging process and output your thoughts as an explanation
After providing your explanation, you must output the corresponding step number with an error. Use the following format:
Explanation: Your explanation here
Verdict: The step number with the error or -1 if no error occurs
""".strip()
PROMPT_SINGLE="""
[User Question]
{instruction}
[The Start of Assistant's Answer]
{response}
[The End of Assistant's Answer]
""".strip()
Example inference with SGLang
For FARE-20B (a gpt-oss variant), our evaluations were conducted with SGLang. We provide a minimal working example of pairwise evaluation below. For example usage with vLLM, see FARE-8B.
# instantiate model
from sglang.srt.entrypoints.engine import Engine
from transformers import AutoTokenizer
from prompts import PROMPT_PAIRWISE_SYSTEM, PROMPT_PAIRWISE # Prompt templates saved in a prompts.py file
if __name__ == '__main__':
    llm = Engine(
        model_path="Salesforce/FARE-20B",
        tp_size=8,
        context_length=65536,
        trust_remote_code=True,
        mem_fraction_static=0.85,
    )
    tokenizer = AutoTokenizer.from_pretrained("Salesforce/FARE-20B", trust_remote_code=True)

    # format data
    data = [
        {"question": "What is 5 + 10?", "response_a": "The answer is 15!", "response_b": "The answer is 16!"}
    ]
    formatted = [
        PROMPT_PAIRWISE.format(
            instruction=d["question"],
            response_a=d["response_a"],
            response_b=d["response_b"],
        )
        for d in data
    ]
    messages_lst = [
        [{"role": "system", "content": PROMPT_PAIRWISE_SYSTEM}, {"role": "user", "content": user_formatted}]
        for user_formatted in formatted
    ]
    prompts = [tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) for messages in messages_lst]

    # inference!
    sampling_params = {
        "temperature": 0.0,
        "max_new_tokens": 32768,
        "top_p": 1.0,
        "top_k": -1,
        "skip_special_tokens": False,  # Needed for GPT-OSS to output channel tags correctly; otherwise gets rendered as `assistantfinal blah blah`
    }
    outputs = llm.generate(prompts, sampling_params)

    # extract CoT tokens and assistant turn
    for output in outputs:
        text = output["text"]
        cot_text, evaluator_text = text.split("<|end|><|start|>assistant<|channel|>final<|message|>")
        cot_text = cot_text.split("<|channel|>analysis<|message|>")[-1]
        print(cot_text)
        # We need to evaluate which assistant's answer is better. The question: "What is 5 + 10?" The correct answer is 15. Assistant A says 15, correct. Assistant B says 16, incorrect. So A is better. Provide explanation.
        print(evaluator_text)
        # Explanation:
        # The user asked for the sum of 5 and 10, which mathematically equals 15.
        # - **Assistant A** correctly states the answer as 15.
        # - **Assistant B** incorrectly states the answer as 16.
        #
        # Since the evaluation criteria prioritize accuracy and logical correctness, Assistant A’s answer is accurate and therefore superior.
        #
        # Verdict: [A]
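The final verdict can be recovered from evaluator_text with light post-processing. The snippet below is a minimal sketch that assumes the "Verdict: [A]" / "Verdict: [B]" format requested in the pairwise system prompt; parse_pairwise_verdict is an illustrative helper, not part of the released code.
import re

def parse_pairwise_verdict(evaluator_text):
    # The pairwise system prompt asks for a final line of the form "Verdict: [A]" or "Verdict: [B]".
    match = re.search(r"Verdict:\s*\[([AB])\]", evaluator_text)
    return match.group(1) if match else None

print(parse_pairwise_verdict(evaluator_text))  # "A" for the example output above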
Ethics disclaimer for Salesforce AI models, data, code
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our standard AUP and AI AUP.
Citation
@misc{xu2025foundational,
title={Foundational Automatic Evaluators: Scaling Multi-Task Generative Evaluator Training for Reasoning-Centric Domains},
author={Xu, Austin and Nguyen, Xuan-Phi and Zhou, Yilun and Wu, Chien-Sheng and Xiong, Caiming and Joty, Shafiq},
year={2025},
journal={arXiv preprint arXiv:2510.17793},
}