TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks
Abstract
TROVE, a training-free method, curates and refines reusable high-level functions for code language models to solve tasks more efficiently and accurately.
Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs. However, using only primitive functions often leads to verbose and error-prone programs, while higher-level functions require expert design. To enable better solutions without human labor, we ask code LMs to curate reusable high-level functions and use them to write solutions. We present TROVE, a training-free method for inducing a verifiable and efficient toolbox of functions: it generates solutions while using, growing, and periodically trimming the toolbox. On 11 datasets spanning math, table question answering, and image reasoning, TROVE consistently yields simpler solutions with higher accuracy than baselines using CodeLlama and previous methods using GPT, while using 79-98% smaller toolboxes. TROVE further enables 31% faster and 13% more accurate human verification than baselines. With the same pipeline, it creates diverse functions for varied tasks and datasets, providing insights into their individual characteristics.
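The use/grow/trim loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Toolbox` class, its method names, and the usage-count trimming criterion are all hypothetical stand-ins, and the actual LM prompting, solution generation, and verification steps are omitted.

```python
from collections import Counter

class Toolbox:
    """Hypothetical sketch of a TROVE-style toolbox that grows with newly
    induced helper functions and periodically trims unused ones."""

    def __init__(self, trim_every=5):
        self.functions = {}          # function name -> source code
        self.usage = Counter()       # function name -> times used in solutions
        self.trim_every = trim_every
        self.steps = 0

    def grow(self, new_functions):
        """Add helper functions induced while solving the current task."""
        self.functions.update(new_functions)

    def record_use(self, used_names):
        """Record which toolbox functions the chosen solution called."""
        self.usage.update(used_names)

    def trim(self):
        """Drop functions that were never reused by any solution."""
        self.functions = {name: src for name, src in self.functions.items()
                          if self.usage[name] > 0}

    def step(self, new_functions, used_names):
        """One task: grow the toolbox, record usage, trim on schedule."""
        self.grow(new_functions)
        self.record_use(used_names)
        self.steps += 1
        if self.steps % self.trim_every == 0:
            self.trim()

# Toy usage: "g" is induced but never called, so a trim removes it.
tb = Toolbox(trim_every=2)
tb.step({"f": "def f(x): return x + 1"}, used_names=["f"])
tb.step({"g": "def g(x): return x * 2"}, used_names=["f"])
```

Keeping only functions that solutions actually reuse is what keeps the toolbox small while staying useful, mirroring the 79-98% toolbox-size reduction the abstract reports.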
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- A Compute-Matched Re-Evaluation of TroVE on MATH (2025)
- AQuilt: Weaving Logic and Self-Inspection into Low-Cost, High-Relevance Data Synthesis for Specialist LLMs (2025)
- Format-Adapter: Improving Reasoning Capability of LLMs by Adapting Suitable Format (2025)
- KG-Augmented Executable CoT for Mathematical Coding (2025)
- PBE Meets LLM: When Few Examples Aren't Few-Shot Enough (2025)
- OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique (2025)
- A Toolbox, Not a Hammer - Multi-TAG: Scaling Math Reasoning with Multi-Tool Aggregation (2025)