---
task_categories:
- image-text-to-text
dataset_info:
  features:
  - name: id
    dtype: string
  - name: difficulty
    dtype: string
  - name: task_id
    dtype: string
  - name: image
    dtype: image
  - name: correct_tool
    dtype: string
  - name: cot_prompt
    dtype: string
  - name: no_cot_prompt
    dtype: string
  splits:
  - name: train
    num_bytes: 1670050311
    num_examples: 1012
  download_size: 1670796005
  dataset_size: 1670050311
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
PhysToolBench: Benchmarking Physical Tool Understanding for MLLMs
Hugging Face Paper | arXiv Paper | GitHub Repo
📢 News
- [2025.10.13] The paper is now available on arXiv!
- [2025.10.10] We release the dataset and the code! Welcome to use and star our project!
Introduction
"Man is a Tool-using Animal; without tools he is nothing, with tools he is all." --Thomas Carlyle
For an Embodied Agent, using physical tools is crucial in many tasks, and how well the agent understands those tools significantly affects the task's success rate and execution efficiency (Top). PhysToolBench (Bottom) systematically evaluates multimodal LLMs' understanding of physical tools. The benchmark is designed with three progressive levels of difficulty and uses a Visual Question Answering (VQA) format. Note that in the actual benchmark, tools in the images are numerically labeled; the images here are for illustrative purposes only.
The ability to use, understand, and create tools is a hallmark of human intelligence, enabling sophisticated interaction with the physical world. For any general-purpose intelligent agent to achieve true versatility, it must also master these fundamental skills. While modern Multimodal Large Language Models (MLLMs) leverage their extensive common knowledge for high-level planning in embodied AI and in downstream Vision-Language-Action (VLA) models, the extent of their true understanding of physical tools remains unquantified.
To bridge this gap, we present PhysToolBench, the first benchmark dedicated to evaluating the comprehension of physical tools by MLLMs. Our benchmark is structured as a Visual Question Answering (VQA) dataset comprising over 1,000 image-text pairs. It assesses capabilities across three distinct difficulty levels:
- Tool Recognition: Requiring the recognition of a tool's primary function.
- Tool Understanding: Testing the ability to grasp the underlying principles of a tool's operation.
- Tool Creation: Challenging the model to fashion a new tool from surrounding objects when conventional options are unavailable.
Our comprehensive evaluation of 32 MLLMs, spanning proprietary models, open-source models, specialized embodied models, and the backbones used in VLAs, reveals a significant deficiency in tool understanding. Furthermore, we provide an in-depth analysis and propose preliminary solutions.
Sample Usage
Set up
Environment setup:
git clone https://github.com/PhysToolBench/PhysToolBench.git
cd PhysToolBench
conda create -n phystoolbench python=3.10  # the Python version shown is illustrative; match the repo's requirements
conda activate phystoolbench
pip install -r requirements.txt
Download the dataset:
The dataset is available at the Hugging Face repo.
huggingface-cli download --repo-type dataset zhangzixin02/PhysToolBench
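If you prefer to load the data programmatically, the schema in the dataset card above maps directly onto the `datasets` library. A minimal sketch (field names are taken from the card; the only split is `train`):
```python
# A minimal sketch of loading PhysToolBench with the `datasets` library.
# Field names come from the dataset card above; the only split is `train`.
from datasets import load_dataset

ds = load_dataset("zhangzixin02/PhysToolBench", split="train")

sample = ds[0]
print(sample["id"], sample["difficulty"], sample["task_id"])
print(sample["correct_tool"])        # ground-truth answer
print(sample["cot_prompt"][:200])    # prompt variant with chain-of-thought instructions
print(sample["no_cot_prompt"][:200]) # prompt variant without chain-of-thought instructions
sample["image"].save("example.png")  # decoded as a PIL image
```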
Inference
You can run MLLMs in two ways to evaluate them on PhysToolBench:
1. Use the API of proprietary MLLMs:
Our code will automatically choose the appropriate API interface based on the model name. For example, to evaluate the gpt-5 model, you can run:
python src/inference.py --model_name gpt-5 --api_url https://xxxxxx --api_key "sk-xxxxxx" --resume # Put your own API URL and API key here
We recommend using multiple threads to speed up inference for proprietary models. For example, to use 8 threads, you can run:
python src/inference.py --model_name gpt-5 --api_url https://xxxxxx --api_key "sk-xxxxxx" --resume --num_threads 8
You can modify the logic in src/model_api.py to support more models or use different API interfaces. Currently, the OpenAI, Claude, and Gemini formats are supported.
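For orientation, the name-based dispatch described above might look roughly like the sketch below. This is not the repository's actual code: the `query_model` helper, its signature, and the prefix checks are assumptions, and only the OpenAI-format branch is filled in.
```python
# Illustrative sketch only -- not the actual src/model_api.py. It shows how a model name
# could be routed to the matching API format; the helper name and signature are hypothetical.
import base64

from openai import OpenAI  # the Claude/Gemini branches would use their own SDKs


def query_model(model_name: str, prompt: str, image_path: str, api_url: str, api_key: str) -> str:
    """Dispatch a VQA request based on the model name (hypothetical helper)."""
    if model_name.startswith(("gpt-", "o1", "o3")):  # OpenAI-format branch
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        client = OpenAI(base_url=api_url, api_key=api_key)
        response = client.chat.completions.create(
            model=model_name,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content
    if model_name.startswith("claude"):
        raise NotImplementedError("Add an Anthropic-format branch here.")
    if model_name.startswith("gemini"):
        raise NotImplementedError("Add a Gemini-format branch here.")
    raise ValueError(f"Unsupported model family: {model_name}")
```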
2. Deploy the open-source models and run them locally:
To facilitate large-scale inference, we deploy the open-source models used in our paper as FastAPI servers so that they can be accessed via API for flexible evaluation. Note that additional dependencies need to be installed for local inference. You can refer to the requirements.txt file for the dependencies; we recommend following the original repository of the MLLM you use for dependency installation.
2.1. Start the server:
python vlm_local/Qwen-2.5VL/qwen_2_5vl_server.py --port 8004 # deploy the Qwen-2.5-VL server on port 8004; change the port to any other available port
2.2. Run the local MLLM:
python src/inference.py --model_name qwen-2.5-vl-7B --api_url http://localhost:8004 --api_key "" --resume # Evaluate the qwen-2.5-vl-7B model
Since lmdeploy and vLLM are also compatible with the OpenAI API format, you can easily deploy other models with them, as long as those models are supported by these tools.
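Because such deployments expose an OpenAI-compatible endpoint, you can sanity-check the server with the standard OpenAI Python client before launching the full evaluation. The base URL, port, and model name below are placeholders for your own deployment:
```python
# Quick sanity check against a locally deployed, OpenAI-compatible server (e.g. vLLM or lmdeploy).
# The base URL, port, and model name are placeholders -- match them to your own deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8004/v1", api_key="EMPTY")  # local servers typically ignore the key
response = client.chat.completions.create(
    model="qwen-2.5-vl-7B",  # placeholder: use the name your server was launched with
    messages=[{"role": "user", "content": "What is the primary function of a claw hammer?"}],
)
print(response.choices[0].message.content)
```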
Evaluate
You can evaluate the results and calculate the scores with the following command:
python src/metric.py
Citation
If you use PhysToolBench in your research, please cite the following paper:
@article{zhang2025phystoolbench,
title={PhysToolBench: Benchmarking Physical Tool Understanding for MLLMs},
author={Zhang, Zixin and Chen, Kanghao and Lin, Xingwang and Jiang, Lutao and Zheng, Xu and Lyu, Yuanhuiyi and Guo, Litao and Li, Yinchuan and Chen, Ying-Cong},
journal={arXiv preprint arXiv:2510.09507},
year={2025}
}
Acknowledgement
Our code is built upon the following repositories, and we thank the authors for their contributions: