Dataset schema (per record):
- paper: string (title; length 14–183 characters)
- authors: list of strings (1–95 entries)
- abstract: string (length 246–3.6k characters)
- link: string (OpenReview URL; length 42 characters)
- track: string (2 distinct values)
- award: string (3 distinct values)
- paper_id: string (length 10 characters)
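To make the schema concrete, below is a minimal sketch of how records with these fields could be read and queried, assuming the rows are exported one JSON object per line. Only the field names come from the schema above; the file name `papers.jsonl` is hypothetical.

```python
import json

def load_records(path="papers.jsonl"):
    """Read one JSON object per line and return the list of records."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

if __name__ == "__main__":
    records = load_records()
    # Example query: titles of Spotlight papers in the Datasets and Benchmarks track.
    spotlights = [
        r["paper"]
        for r in records
        if r.get("track") == "Datasets and Benchmarks" and r.get("award") == "Spotlight"
    ]
    for title in spotlights:
        print(title)
```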
SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models
[ "Xianda Guo", "Ruijun Zhang", "Yiqun Duan", "Yuhang He", "Dujun Nie", "Wenke Huang", "Chenming Zhang", "Shuai Liu", "Hao Zhao", "Long Chen" ]
Accurate spatial reasoning in outdoor environments—covering geometry, object pose, and inter-object relationships—is fundamental to downstream tasks such as mapping, motion forecasting, and high-level planning in autonomous driving. We introduce SURDS, a large-scale benchmark designed to systematically evaluate the spatial reasoning capabilities of vision language models (VLMs). Built on the nuScenes dataset, SURDS comprises 41,080 vision–question–answer training instances and 9,250 evaluation samples, spanning six spatial categories: orientation, depth estimation, pixel-level localization, pairwise distance, lateral ordering, and front–behind relations. We benchmark leading general-purpose VLMs, including GPT, Gemini, and Qwen, revealing persistent limitations in fine-grained spatial understanding. To address these deficiencies, we go beyond static evaluation and explore whether alignment techniques can improve spatial reasoning performance. Specifically, we propose a reinforcement learning–based alignment scheme leveraging spatially grounded reward signals—capturing both perception-level accuracy (location) and reasoning consistency (logic). We further incorporate final-answer correctness and output-format rewards to guide fine-grained policy adaptation. Our GRPO-aligned variant achieves an overall score of 40.80 on the SURDS benchmark. Notably, it outperforms proprietary systems such as GPT-4o (13.30) and Gemini-2.0-flash (35.71). To the best of our knowledge, this is the first study to demonstrate that reinforcement learning–based alignment can significantly and consistently enhance the spatial reasoning capabilities of VLMs in real-world driving contexts. We release the SURDS benchmark, evaluation toolkit, and GRPO alignment code at: https://github.com/XiandaGuo/Drive-MLLM.
https://openreview.net/forum?id=A9Jfrc1ouy
Datasets and Benchmarks
Poster
A9Jfrc1ouy
GTPBD: A Fine-Grained Global Terraced Parcel and Boundary Dataset
[ "Zhiwei Zhang", "Zi Ye", "Yibin Wen", "Shuai Yuan", "Haohuan Fu", "Huang Jianxi", "Juepeng Zheng" ]
Agricultural parcels serve as basic units for conducting agricultural practices and applications, which is vital for land ownership registration, food security assessment, soil erosion monitoring, etc. However, existing agricultural parcel extraction studies focus only on mid-resolution mapping or regular plain farmlands and lack representation of complex terraced terrains, despite the demands of precision agriculture. In this paper, we introduce a more fine-grained terraced parcel dataset named GTPBD (Global Terraced Parcel and Boundary Dataset), the first fine-grained dataset covering major terraced regions worldwide, with more than 200,000 manually annotated complex terraced parcels. GTPBD comprises 47,537 high-resolution images with three-level labels, including pixel-level boundary labels, mask labels, and parcel labels. It covers seven major geographic zones in China and transcontinental climatic regions around the world. Compared to existing datasets, GTPBD poses considerable challenges due to: (1) terrain diversity; (2) complex and irregular parcel objects; and (3) multiple domain styles. The GTPBD dataset is suitable for four different tasks: semantic segmentation, edge detection, terraced parcel extraction, and unsupervised domain adaptation (UDA). Accordingly, we benchmark GTPBD on eight semantic segmentation methods, four edge extraction methods, three parcel extraction methods, and five UDA methods, along with a multi-dimensional evaluation framework integrating pixel-level and object-level metrics. GTPBD fills a critical gap in terraced remote sensing research, providing basic infrastructure for fine-grained agricultural terrain analysis and cross-scenario knowledge transfer. The code and data are available at https://github.com/Z-ZW-WXQ/GTPBD/.
https://openreview.net/forum?id=A3aV30YGqP
Datasets and Benchmarks
Poster
A3aV30YGqP
R&D-Agent-Quant: A Multi-Agent Framework for Data-Centric Factors and Model Joint Optimization
[ "Yuante Li", "Xu Yang", "Xiao Yang", "Xisen Wang", "Weiqing Liu", "Jiang Bian" ]
Financial markets pose fundamental challenges for asset return prediction due to their high dimensionality, non-stationarity, and persistent volatility. Despite advances in large language models and multi-agent systems, current quantitative research pipelines suffer from limited automation, weak interpretability, and fragmented coordination across key components such as factor mining and model innovation. In this paper, we propose R&D-Agent for Quantitative Finance, in short R&D-Agent(Q), the first data-centric multi-agent framework designed to automate the full-stack research and development of quantitative strategies via coordinated factor-model co-optimization. R&D-Agent(Q) decomposes the quant process into two iterative stages: a Research stage that dynamically sets goal-aligned prompts, formulates hypotheses based on domain priors, and maps them to concrete tasks, and a Development stage that employs a code-generation agent, Co-STEER, to implement task-specific code, which is then executed in real-market backtests. The two stages are connected through a feedback stage that thoroughly evaluates experimental outcomes and informs subsequent iterations, with a multi-armed bandit scheduler for adaptive direction selection. Empirically, R&D-Agent(Q) achieves up to 2× higher annualized returns than classical factor libraries using 70% fewer factors, and outperforms state-of-the-art deep time-series models on real markets. Its joint factor–model optimization delivers a strong balance between predictive accuracy and strategy robustness. Our code is available at: https://github.com/microsoft/RD-Agent.
https://openreview.net/forum?id=9VxTXAUH7G
Datasets and Benchmarks
Poster
9VxTXAUH7G
FailureSensorIQ: A Multi-Choice QA Dataset for Understanding Sensor Relationships and Failure Modes
[ "Christodoulos Constantinides", "Dhaval C Patel", "Shuxin Lin", "Claudio Guerrero", "SUNIL DAGAJIRAO PATIL", "Jayant Kalagnanam" ]
We introduce FailureSensorIQ, a novel Multi-Choice Question-Answering (MCQA) benchmarking system designed to assess the ability of Large Language Models (LLMs) to reason about and understand complex, domain-specific scenarios in Industry 4.0. Unlike traditional QA benchmarks, our system focuses on multiple aspects of reasoning through failure modes, sensor data, and the relationships between them across various industrial assets. Through this work, we envision a paradigm shift where modeling decisions are not only data-driven, using statistical tools like correlation analysis and significance tests, but also domain-driven, guided by specialized LLMs that can reason about the key contributors and useful patterns that can be captured with feature engineering. We evaluate the industrial knowledge of over a dozen LLMs, including GPT-4, Llama, and Mistral, on FailureSensorIQ through different lenses: Perturbation-Uncertainty-Complexity analysis, an expert evaluation study, asset-specific knowledge gap analysis, and a ReAct agent using external knowledge bases. Even though closed-source models with strong reasoning capabilities approach expert-level performance, the comprehensive benchmark reveals significant performance drops, with models fragile to perturbations and distractions and limited by inherent knowledge gaps. We also provide a real-world case study of how LLMs can drive modeling decisions on three different failure prediction datasets related to various assets. We release: (a) expert-curated MCQA for various industrial assets, (b) the FailureSensorIQ benchmark and Hugging Face leaderboard based on MCQA built from non-textual data found in ISO documents, and (c) ``LLMFeatureSelector'', an LLM-based feature selection scikit-learn pipeline. The software is available at https://github.com/IBM/FailureSensorIQ.
https://openreview.net/forum?id=9KfkMAy2ut
Datasets and Benchmarks
Poster
9KfkMAy2ut
Can LLMs Correct Themselves? A Benchmark of Self-Correction in LLMs
[ "Guiyao Tie", "Zenghui Yuan", "Zeli Zhao", "Chaoran Hu", "Tianhe Gu", "Ruihang Zhang", "Sizhe Zhang", "Junran Wu", "Xiaoyue Tu", "Ming Jin", "Qingsong Wen", "Lixing Chen", "Pan Zhou", "Lichao Sun" ]
Self-correction of large language models (LLMs) emerges as a critical component for enhancing their reasoning performance. Although various self-correction methods have been proposed, a comprehensive evaluation of these methods remains largely unexplored, and the question of whether LLMs can truly correct themselves is a matter of significant interest and concern. In this study, we introduce **CorrectBench**, a benchmark developed to evaluate the effectiveness of self-correction strategies, including intrinsic, external, and fine-tuned approaches, across three tasks: commonsense reasoning, mathematical reasoning, and code generation. Our findings reveal that: 1) Self-correction methods can improve accuracy, especially for complex reasoning tasks; 2) Mixing different self-correction strategies yields further improvements, though it reduces efficiency; 3) Reasoning LLMs (e.g., DeepSeek-V3) gain little from additional self-correction methods and incur high time costs. Interestingly, a comparatively simple chain-of-thought (CoT) baseline demonstrates competitive accuracy and efficiency. These results underscore the potential of self-correction to enhance LLMs' reasoning performance while highlighting the ongoing challenge of improving their efficiency. Consequently, we advocate for further research focused on optimizing the balance between reasoning capabilities and operational efficiency.
https://openreview.net/forum?id=956KYtqwcU
Datasets and Benchmarks
Poster
956KYtqwcU
TAPAS: Datasets for Learning the Learning with Errors Problem
[ "Eshika Saxena", "Alberto Alfarano", "Francois Charton", "Emily Wenger", "Kristin E. Lauter" ]
AI-powered attacks on Learning with Errors (LWE)—an important hard math problem in post-quantum cryptography—rival or outperform "classical" attacks on LWE under certain parameter settings. Despite the promise of this approach, a dearth of accessible data limits AI practitioners' ability to study and improve these attacks. Creating LWE data for AI model training is time- and compute-intensive and requires significant domain expertise. To fill this gap and accelerate AI research on LWE attacks, we propose the TAPAS datasets, a ${\bf t}$oolkit for ${\bf a}$nalysis of ${\bf p}$ost-quantum cryptography using ${\bf A}$I ${\bf s}$ystems. These datasets cover several LWE settings and can be used off-the-shelf by AI practitioners to prototype new approaches to cracking LWE. This work documents TAPAS dataset creation, establishes attack performance baselines, and lays out directions for future work.
https://openreview.net/forum?id=91scW3DywW
Datasets and Benchmarks
Poster
91scW3DywW
scGeneScope: A Treatment-Matched Single Cell Imaging and Transcriptomics Dataset and Benchmark for Treatment Response Modeling
[ "Joel Dapello", "Marcel Nassar", "Ridvan Eksi", "Ban Wang", "Jules Gagnon-Marchand", "Kenneth T Gao", "Akram Baharlouei", "Kyra Thrush-Evensen", "Nina Riehs", "Amy F Peterson", "Aniket Tolpadi", "Abhejit Rajagopal", "Henry E Miller", "Ashley Mae Conard", "David Alvarez-Melis", "Rory Stark", "Simone Bianco", "Morgan Levine", "Ava P Amini", "Alex Xijie Lu", "Nicolo Fusi", "Ravi Pandya", "Valentina Pedoia", "Hana El-Samad" ]
Understanding cellular responses to chemical interventions is critical to the discovery of effective therapeutics. Because individual biological techniques often measure only one axis of cellular response at a time, high-quality multimodal datasets are needed to unlock a holistic understanding of how cells respond to treatments and to advance computational methods that integrate modalities. However, many techniques destroy cells and thus preclude paired measurements, and attempts to match disparate unimodal datasets are often confounded by data being generated in incompatible experimental settings. Here we introduce scGeneScope, a multimodal single‑cell RNA sequencing (scRNA-seq) and Cell Painting microscopy image dataset conditionally paired by chemical treatment, designed to facilitate the development and benchmarking of unimodal, multimodal, and multiple profile machine learning methods for cellular profiling. 28 chemicals, each acting on distinct biological pathways or mechanisms of action (MoAs), were applied to U2-OS cells in two experimental data generation rounds, creating paired sets of replicates that were then profiled independently by scRNA‑seq or Cell Painting. Using scGeneScope, we derive a replicate- and experiment-split treatment identification benchmark simulating MoA discovery under realistic laboratory variability conditions and evaluate unimodal, multimodal, and multiprofile models ranging in complexity from linear approaches to recent foundation models. Multiprofile integration improved performance in both the unimodal and multimodal settings, with gains more consistent in the former. Evaluation of unimodal models for MoA identification demonstrated that recent scRNA-seq foundation models deployed zero-shot were consistently outperformed by classic fit-to-data methods, underscoring the need for careful, realistic benchmarking in machine learning for biology. We release the scGeneScope dataset and benchmarking code to support further research.
https://openreview.net/forum?id=918POZbZ50
Datasets and Benchmarks
Poster
918POZbZ50
Embodied Web Agents: Bridging Physical-Digital Realms for Integrated Agent Intelligence
[ "Yining Hong", "Rui Sun", "Bingxuan Li", "Xingcheng Yao", "Maxine Wu", "Alexander Chien", "Da Yin", "Ying Nian Wu", "Zhecan Wang", "Kai-Wei Chang" ]
AI agents today are mostly siloed — they either retrieve and reason over vast amounts of digital information and knowledge obtained online; or interact with the physical world through embodied perception, planning and action — but rarely both. This separation limits their ability to solve tasks that require integrated physical and digital intelligence, such as cooking from online recipes, navigating with dynamic map data, or interpreting real-world landmarks using web knowledge. We introduce \textsc{Embodied Web Agents}, a novel paradigm for AI agents that fluidly bridge embodiment and web-scale reasoning. To operationalize this concept, we first develop the \textsc{Embodied Web Agents} task environments, a unified simulation platform that integrates realistic 3D indoor and outdoor environments with functional web interfaces. Building upon this platform, we construct and release the \textsc{Embodied Web Agents} Benchmark, which encompasses a diverse suite of tasks including cooking, navigation, shopping, tourism, and geolocation — all requiring coordinated reasoning across physical and digital realms for systematic assessment of cross-domain intelligence. Experimental results reveal significant performance gaps between state-of-the-art AI systems and human capabilities, establishing both challenges and opportunities at the intersection of embodied cognition and web-scale knowledge access.
https://openreview.net/forum?id=8wyCbbzF4Q
Datasets and Benchmarks
Spotlight
8wyCbbzF4Q
PSBench: a large-scale benchmark for estimating the accuracy of protein complex structural models
[ "Pawan Neupane", "Jian Liu", "Jianlin Cheng" ]
Predicting protein complex structures is essential for protein function analysis, protein design, and drug discovery. While AI methods like AlphaFold can predict accurate structural models for many protein complexes, reliably estimating the quality of these predicted models (estimation of model accuracy, or EMA) for model ranking and selection remains a major challenge. A key barrier to developing effective machine learning-based EMA methods is the lack of large, diverse, and well-annotated datasets for training and evaluation. To address this gap, we introduce PSBench, a benchmark suite comprising five large-scale, labeled datasets, four of which were generated during the 15th and 16th community-wide Critical Assessment of Protein Structure Prediction (CASP15 and CASP16), and one curated for new Protein Data Bank (PDB) entries deposited between July 2024 and August 2025. PSBench includes over 1.4 million structural models covering a wide range of protein sequence lengths, complex stoichiometries, functional classes, and modeling difficulties. Each model is annotated with multiple complementary quality scores at the global, local, and interface levels. PSBench also provides multiple evaluation metrics and baseline EMA methods to facilitate rigorous comparisons. To demonstrate PSBench’s utility, we trained and evaluated GATE, a graph transformer-based EMA method, on the CASP15 data. GATE was blindly tested in CASP16 (2024), where it ranked among the top-performing EMA methods. These results highlight PSBench as a valuable resource for advancing EMA research in protein complex modeling. PSBench is publicly available at: https://github.com/BioinfoMachineLearning/PSBench.
https://openreview.net/forum?id=8raQTkdPos
Datasets and Benchmarks
Poster
8raQTkdPos
CoreaSpeech: Korean Speech Corpus via JAMO-based Coreset Selection for Efficient and Robust Korean Speech Generation
[ "Ki-Joong Kwon", "Jun-Ho So", "Sang-Hoon Lee" ]
While substantial advances have been achieved in TTS for languages such as English and Mandarin, Korean remains comparatively underrepresented due to the lack of rigorous preprocessing methods, systematically constructed datasets, a shortage of standardized Korean TTS benchmarks, and explicitly optimized models for Korean. To address these limitations, we propose a Korean-tailored data-refinement and coreset selection pipeline. It refines speech data and performs textual normalization especially for numerals and English terms, followed by a novel coreset selection strategy that leverages Jamo-based linguistic and phonological features unique to Korean. As a result, we release CoreaSpeech, an efficient and robust Korean speech corpus comprising 700 hours across 21,449 speakers. This refined core subset, evenly balanced across utterances ranging from 0 to 30 seconds, is derived from 2,058 hours of widely used Korean datasets. Building on this, we conducted extensive experiments via cross-lingual fine-tuning with our CoreaSpeech dataset. Furthermore, we introduce a new universal Korean TTS benchmark dataset including clean, noisy, and numeric subsets. Additionally, we demonstrate that our Korean-specific text normalization serves as a plug-and-play module, reliably improving performance regardless of the underlying TTS architecture. We publicly release our dataset, pipeline code, and evaluation benchmarks to support reproducible research and further advances in Korean and multilingual speech synthesis.
https://openreview.net/forum?id=8nHq0IIwpd
Datasets and Benchmarks
Poster
8nHq0IIwpd
LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?
[ "ZiyuanHe", "Yuxuan Wang", "Jiaqi Li", "Kexin Liang", "Muhan Zhang" ]
Large language models (LLMs) have recently been equipped with increasingly extended context windows, yet their long-context understanding capabilities over long-dependency tasks remain fundamentally limited and underexplored. This gap is especially significant in many real-world long-context applications that have rarely been benchmarked. In this paper, we introduce $\textbf{LooGLE v2}$, a novel benchmark designed to evaluate LLMs' long-context ability in real-world applications and scenarios. Our benchmark consists of automatically collected real-world long texts, ranging from 16k to 2M tokens, encompassing the domains of law, finance, games, and code. Accordingly, we carefully design 10 types of domain-specific long-dependency tasks and generate 1,934 QA instances of varying diversity and complexity with a scalable data curation pipeline that can be extended for further practical needs. We conduct a comprehensive assessment of 6 locally deployed and 4 API-based LLMs. The evaluation results show that even the best-performing model achieves only a 59.2\% overall score on our benchmark. Despite their extensive context windows, popular LLMs are only capable of understanding a much shorter length of context than they claim, revealing significant limitations in their ability to handle real-world tasks with long dependencies and highlighting substantial room for model improvement in practical long-context understanding.
https://openreview.net/forum?id=8ObELfOFbB
Datasets and Benchmarks
Poster
8ObELfOFbB
CheMixHub: Datasets and Benchmarks for Chemical Mixture Property Prediction
[ "Ella Miray Rajaonson", "Mahyar Rajabi Kochi", "Luis M. Mejía-Mendoza", "Seyed Mohamad Moosavi", "Benjamin Manuel Sanchez" ]
Developing improved predictive models for multi-molecular systems is crucial, as nearly every chemical product used results from a mixture of chemicals. Despite being a vital part of the industry pipeline, the chemical mixture space remains relatively unexplored by the Machine Learning (ML) community. In this paper, we introduce CheMixHub, a holistic benchmark for molecular mixtures spanning a corpus of 11 chemical mixture property prediction tasks. With applications ranging from drug delivery formulations to battery electrolytes, CheMixHub currently totals approximately 500k data points gathered and curated from 7 publicly available datasets. We devise various data splitting techniques to assess context-specific generalization and model robustness, providing a foundation for the development of predictive models for chemical mixture properties. Furthermore, we map out the modelling space of deep learning models for chemical mixtures, establishing initial benchmarks for the community. This dataset has the potential to accelerate chemical mixture development, encompassing reformulation, optimization, and discovery. The dataset and code for the benchmarks can be found at: https://github.com/chemcognition-lab/chemixhub.
https://openreview.net/forum?id=8HUnx0rJNq
Datasets and Benchmarks
Poster
8HUnx0rJNq
CPRet: A Dataset, Benchmark, and Model for Retrieval in Competitive Programming
[ "Han Deng", "Yuan Meng", "SHIXIANG TANG", "Wanli Ouyang", "Xinzhu Ma" ]
Competitive programming is widely used to evaluate the coding and reasoning abilities of large language models. However, the growing presence of duplicate or highly similar problems raises concerns not only about competition fairness, but also about the validity of competitive programming as a benchmark for model evaluation. We introduce a retrieval-oriented benchmark suite for competitive programming, covering four retrieval tasks—two code-centric (Text-to-Code, Code-to-Code) and two newly proposed problem-centric tasks (Problem-to-Duplicate, Simplified-to-Full)—built from a combination of automatically crawled problem–solution data and manually curated annotations. Our contribution includes both high-quality training data and temporally separated test sets for reliable evaluation. We develop two task-specialized retrievers based on this dataset: CPRetriever-Code, trained with a novel Group-InfoNCE loss for problem–code alignment, and CPRetriever-Prob, fine-tuned for problem-level similarity. Both models achieve strong results and are open-sourced for local use. Finally, we analyze LiveCodeBench and find that high-similarity problems inflate model pass rates and reduce differentiation, underscoring the need for similarity-aware evaluation in future benchmarks.
https://openreview.net/forum?id=8FZ4oRWJjq
Datasets and Benchmarks
Poster
8FZ4oRWJjq
MedSG-Bench: A Benchmark for Medical Image Sequences Grounding
[ "Jingkun Yue", "Siqi Zhang", "Zinan Jia", "Huihuan Xu", "Zongbo Han", "Xiaohong Liu", "Guangyu Wang" ]
Visual grounding is essential for precise perception and reasoning in multimodal large language models (MLLMs), especially in medical imaging domains. While existing medical visual grounding benchmarks primarily focus on single-image scenarios, real-world clinical applications often involve sequential images, where accurate lesion localization across different modalities and temporal tracking of disease progression (e.g., pre- vs. post-treatment comparison) require fine-grained cross-image semantic alignment and context-aware reasoning. To remedy the underrepresentation of image sequences in existing medical visual grounding benchmarks, we propose MedSG-Bench, the first benchmark tailored for Medical Image Sequences Grounding. It comprises eight VQA-style tasks, formulated into two grounding paradigms: 1) Image Difference Grounding, which focuses on detecting change regions across images, and 2) Image Consistency Grounding, which emphasizes detection of consistent or shared semantics across sequential images. MedSG-Bench covers 76 public datasets, 10 medical imaging modalities, and a wide spectrum of anatomical structures and diseases, totaling 9,630 question–answer pairs. We benchmark both general-purpose MLLMs (e.g., Qwen2.5-VL) and medical-domain specialized MLLMs (e.g., HuatuoGPT-vision), observing that even the advanced models exhibit substantial limitations in medical sequential grounding tasks. To advance this field, we construct MedSG-188K, a large-scale instruction-tuning dataset tailored for sequential visual grounding, and further develop MedSeq-Grounder, an MLLM designed to facilitate future research on fine-grained understanding across medical sequential images. We release all resources at https://github.com/Yuejingkun/MedSG-Bench.
https://openreview.net/forum?id=8CKhxBaWO5
Datasets and Benchmarks
Spotlight
8CKhxBaWO5
SoMi-ToM: Evaluating Multi-Perspective Theory of Mind in Embodied Social Interactions
[ "Xianzhe Fan", "Xuhui Zhou", "Chuanyang Jin", "Kolby Nottingham", "Hao Zhu", "Maarten Sap" ]
Humans continuously infer the states, goals, and behaviors of others by perceiving their surroundings in dynamic, real-world social interactions. However, most Theory of Mind (ToM) benchmarks only evaluate static, text-based scenarios, which have a significant gap compared to real interactions. We propose the SoMi-ToM benchmark, designed to evaluate multi-perspective ToM in embodied multi-agent complex social interactions. This benchmark is based on rich multimodal interaction data generated by the interaction environment SoMi, covering diverse crafting goals and social relationships. Our framework supports multi-level evaluation: (1) first-person evaluation provides multimodal (visual, dialogue, action, etc.) input from a first-person perspective during a task for real-time state inference, (2) third-person evaluation provides complete third-person perspective video and text records after a task for goal and behavior inference. This evaluation method allows for a more comprehensive examination of a model's ToM capabilities from both the subjective immediate experience and the objective global observation. We constructed a challenging dataset containing 35 third-person perspective videos, 363 first-person perspective images, and 1225 expert-annotated multiple-choice questions (three options). On this dataset, we systematically evaluated the performance of human subjects and several state-of-the-art large vision-language models (LVLMs). The results show that LVLMs perform significantly worse than humans on SoMi-ToM: the average accuracy gap between humans and models is 40.1% in first-person evaluation and 26.4% in third-person evaluation. This indicates that future LVLMs need to further improve their ToM capabilities in embodied, complex social interactions.
https://openreview.net/forum?id=7zFLFtqBm0
Datasets and Benchmarks
Poster
7zFLFtqBm0
WolBanking77: Wolof Banking Speech Intent Classification Dataset
[ "Abdou Karim KANDJI", "Frederic Precioso", "Cheikh BA", "Samba NDIAYE", "Augustin NDIONE" ]
Intent classification models have made significant progress in recent years. However, previous studies primarily focus on high-resource language datasets, which results in a gap for low-resource languages and for regions with high rates of illiteracy, where languages are more spoken than read or written. This is the case in Senegal, for example, where Wolof is spoken by around 90\% of the population while the national illiteracy rate remains at 42\%. Wolof is spoken by more than 10 million people in the West African region. To address these limitations, we introduce the Wolof Banking Speech Intent Classification Dataset (WolBanking77) for academic research in intent classification. WolBanking77 currently contains 9,791 text sentences in the banking domain and more than 4 hours of spoken sentences. Experiments on various baselines are conducted in this work, including state-of-the-art text and voice models. The results on the current dataset are very promising. In addition, this paper presents an in-depth examination of the dataset’s contents. We report baseline F1-score and word error rate metrics for NLP and ASR models, respectively, trained on the WolBanking77 dataset, as well as comparisons between models. Dataset and code available at: [wolbanking77](https://github.com/abdoukarim/wolbanking77).
https://openreview.net/forum?id=7k0JBDeHAv
Datasets and Benchmarks
Poster
7k0JBDeHAv
Introducing FOReCAst: The Future Outcome Reasoning and Confidence Assessment Benchmark
[ "Moy Yuan", "Zifeng Ding", "Andreas Vlachos" ]
Forecasting is an important task in many domains. However, existing forecasting benchmarks lack comprehensive confidence assessment, focus on limited question types, and often consist of artificial questions that do not reflect real-world needs. To address these gaps, we introduce FOReCAst (Future Outcome Reasoning and Confidence Assessment), a benchmark that evaluates models' ability to make predictions and their confidence in them. FOReCAst spans diverse forecasting scenarios involving Boolean questions, timeframe prediction, and quantity estimation, enabling a comprehensive evaluation of both prediction accuracy and confidence calibration for real-world applications.
https://openreview.net/forum?id=7hVyqs8NaP
Datasets and Benchmarks
Poster
7hVyqs8NaP
mmWalk: Towards Multi-modal Multi-view Walking Assistance
[ "Kedi Ying", "Ruiping Liu", "Chongyan Chen", "Mingzhe Tao", "Hao Shi", "Kailun Yang", "Jiaming Zhang", "Rainer Stiefelhagen" ]
Walking assistance in extreme or complex environments remains a significant challenge for people with blindness or low vision (BLV), largely due to the lack of a holistic scene understanding. Motivated by the real-world needs of the BLV community, we build mmWalk, a simulated multi-modal dataset that integrates multi-view sensor and accessibility-oriented features for outdoor safe navigation. Our dataset comprises $120$ manually controlled, scenario-categorized walking trajectories with $62k$ synchronized frames. It contains over $559k$ panoramic images across RGB, depth, and semantic modalities. Furthermore, to emphasize real-world relevance, each trajectory involves outdoor corner cases and accessibility-specific landmarks for BLV users. Additionally, we generate mmWalkVQA, a VQA benchmark with over $69k$ visual question-answer triplets across $9$ categories tailored for safe and informed walking assistance. We evaluate state-of-the-art Vision-Language Models (VLMs) in zero- and few-shot settings and find that they struggle with our risk assessment and navigational tasks. We validate our mmWalk-finetuned model on real-world datasets and show the effectiveness of our dataset for advancing multi-modal walking assistance.
https://openreview.net/forum?id=7WDFZKtf7q
Datasets and Benchmarks
Poster
7WDFZKtf7q
VideoCAD: A Dataset and Model for Learning Long‑Horizon 3D CAD UI Interactions from Video
[ "Brandon Man", "Ghadi Nehme", "Md Ferdous Alam", "Faez Ahmed" ]
Computer-Aided Design (CAD) is a time-consuming and complex process, requiring precise, long-horizon user interactions with intricate 3D interfaces. While recent advances in AI-driven user interface (UI) agents show promise, most existing datasets and methods focus on short, low-complexity tasks in mobile or web applications, failing to capture the demands of professional engineering tools. In this work, we introduce VideoCAD, the first attempt to model UI interactions for precision engineering tasks. Specifically, VideoCAD is a large-scale synthetic dataset consisting of over 41K annotated video recordings of CAD operations, generated using an automated framework for collecting high-fidelity UI action data from human-made CAD designs. Compared to existing datasets, VideoCAD offers an order-of-magnitude increase in complexity for real-world engineering UI tasks, with time horizons up to $20\times$ longer than those in other datasets. We show two important downstream applications of VideoCAD: (1) learning UI interactions from professional 3D CAD tools for precision tasks and (2) a visual question-answering (VQA) benchmark designed to evaluate multimodal large language models (LLMs) on spatial reasoning and video understanding. To learn the UI interactions, we propose VideoCADFormer, a state-of-the-art model for learning CAD interactions directly from video, which outperforms existing behavior cloning baselines. Both VideoCADFormer and the VQA benchmark derived from VideoCAD reveal key challenges in the current state of video-based UI understanding, including the need for precise action grounding, multi-modal and spatial reasoning, and long-horizon dependencies. Dataset and code available at: https://github.com/ghadinehme/VideoCAD.
https://openreview.net/forum?id=7SD9RCvcb9
Datasets and Benchmarks
Poster
7SD9RCvcb9
Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving of Inequalities
[ "Haoyu Zhao", "Yihan Geng", "Shange Tang", "Yong Lin", "Bohan Lyu", "Hongzhou Lin", "Chi Jin", "Sanjeev Arora" ]
LLM-based formal proof assistants (e.g., in Lean) hold great promise for automating mathematical discovery. But beyond syntactic correctness, do these systems truly understand mathematical structure as humans do? We investigate this question in the context of mathematical inequalities---specifically, the prover's ability to recognize that a given problem simplifies by applying a known inequality such as AM/GM. In particular, we are interested in their ability to do this in a {\em compositional setting} where multiple inequalities must be applied as part of a solution. We introduce Ineq-Comp, a benchmark built from elementary inequalities through systematic transformations, including variable duplication, algebraic rewriting, and multi-step composition. Although these problems remain easy for humans, we find that most provers---including Goedel, STP, and Kimina-7B---struggle significantly. DeepSeek-Prover-V2-7B shows relative robustness, but still suffers a 20\% performance drop (pass@32). Even for the DeepSeek-Prover-V2-671B model, the gap between compositional variants and seed problems persists, implying that simply scaling up model size does not fully solve the compositional weakness. Strikingly, performance remains poor for all models even when formal proofs of the constituent parts are provided in context, revealing that the source of weakness is indeed in compositional reasoning. Our results expose a persistent gap between the generalization behavior of current AI provers and human mathematical intuition. All data and evaluation code can be found at \url{https://github.com/haoyuzhao123/LeanIneqComp}.
https://openreview.net/forum?id=7B8AYtmQRT
Datasets and Benchmarks
Poster
7B8AYtmQRT
MolVision: Molecular Property Prediction with Vision Language Models
[ "Deepan Adak", "Yogesh S Rawat", "Shruti Vyas" ]
Molecular property prediction is a fundamental task in computational chemistry with critical applications in drug discovery and materials science. While recent works have explored Large Language Models (LLMs) for this task, they primarily rely on textual molecular representations such as SMILES/SELFIES, which can be ambiguous and structurally uninformative. In this work, we introduce MolVision, a novel approach that leverages Vision-Language Models (VLMs) by integrating both molecular structure images and textual descriptions to enhance property prediction. We construct a benchmark spanning nine diverse datasets, covering both classification and regression tasks. Evaluating nine different VLMs in zero-shot, few-shot, and fine-tuned settings, we find that visual information improves prediction performance, particularly when combined with efficient fine-tuning strategies such as LoRA. Our results reveal that while visual information alone is insufficient, multimodal fusion significantly enhances generalization across molecular properties. Adapting the vision encoder to molecular images in conjunction with LoRA further improves performance. The code and data are available at: https://molvision.github.io/MolVision/.
https://openreview.net/forum?id=6vI3OOYddm
Datasets and Benchmarks
Poster
6vI3OOYddm
The Catechol Benchmark: Time-series Solvent Selection Data for Few-shot Machine Learning
[ "Toby Boyne", "Juan S Campos", "Rebecca D. Langdon", "Jixiang Qing", "Yilin Xie", "Shiqiang Zhang", "Calvin Tsay", "Ruth Misener", "Daniel W. Davies", "Kim E Jelfs", "Sarah Boyall", "Thomas Mark Dixon", "Linden Schrecker", "Jose Pablo Folch" ]
Machine learning has promised to change the landscape of laboratory chemistry, with impressive results in molecular property prediction and reaction retro-synthesis. However, chemical datasets are often inaccessible to the machine learning community, as they tend to require cleaning and a thorough understanding of the chemistry, or are simply not available. In this paper, we introduce a novel dataset for yield prediction, providing the first-ever transient flow dataset for machine learning benchmarking, covering over 1200 process conditions. While previous datasets focus on discrete parameters, our experimental set-up allows us to sample a large number of continuous process conditions, generating new challenges for machine learning models. We focus on solvent selection, a task that is particularly difficult to model theoretically and therefore ripe for machine learning applications. We showcase benchmarking for regression algorithms, transfer-learning approaches, feature engineering, and active learning, with important applications towards solvent replacement and sustainable manufacturing.
https://openreview.net/forum?id=6l8q74TabE
Datasets and Benchmarks
Poster
6l8q74TabE
Merlin L48 Spectrogram Dataset
[ "Aaron Sun", "Subhransu Maji", "Grant Van Horn" ]
In the single-positive multi-label (SPML) setting, each image in a dataset is labeled with the presence of a single class, while the true presence of other classes remains unknown. The challenge is to narrow the performance gap between this partially-labeled setting and fully-supervised learning, which often requires a significant annotation budget. Prior SPML methods were developed and benchmarked on synthetic datasets created by randomly sampling single positive labels from fully-annotated datasets like Pascal VOC, COCO, NUS-WIDE, and CUB200. However, this synthetic approach does not reflect real-world scenarios and fails to capture the fine-grained complexities that can lead to difficult misclassifications. In this work, we introduce the L48 dataset, a fine-grained, real-world multi-label dataset derived from recordings of bird sounds. L48 provides a natural SPML setting with single-positive annotations on a challenging, fine-grained domain, as well as two extended settings in which domain priors give access to additional negative labels. We benchmark existing SPML methods on L48 and observe significant performance differences compared to synthetic datasets and analyze method weaknesses, underscoring the need for more realistic and difficult benchmarks.
https://openreview.net/forum?id=6hY4hBL8M6
Datasets and Benchmarks
Poster
6hY4hBL8M6
SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
[ "Xeron Du", "Yifan Yao", "Kaijing Ma", "Bingli Wang", "Tianyu Zheng", "King Zhu", "Minghao Liu", "Yiming Liang", "Xiaolong Jin", "Zhenlin Wei", "Chujie Zheng", "Kaixin Deng", "Shuyue Guo", "Shian Jia", "Sichao Jiang", "Yiyan Liao", "Rui Li", "Qinrui Li", "Sirun Li", "Yizhi LI", "Yunwen Li", "dehua ma", "Yuansheng Ni", "Haoran Que", "Qiyao Wang", "Zhoufutu Wen", "Siwei Wu", "Tianshun Xing", "许明", "Zhenzhu Yang", "Zekun Moore Wang", "Junting Zhou", "yuelin bai", "Xingyuan Bu", "chenglin cai", "Liang Chen", "Yifan Chen", "Cheng Chengtuo", "Tianhao Cheng", "Keyi Ding", "Siming Huang", "HUANG YUN", "Yaoru Li", "Yizhe Li", "Zhaoqun Li", "Tianhao Liang", "Chengdong Lin", "Hongquan Lin", "Yinghao Ma", "Z.Y. Peng", "Zifan Peng", "Qige Qi", "Shi Qiu", "Xingwei Qu", "Shanghaoran Quan", "Yizhou Tan", "Zili Wang", "王晨清", "Hao Wang", "Yiya Wang", "Yubo Wang", "Jiajun Xu", "Kexin Yang", "Ruibin Yuan", "Yuanhao Yue", "Tianyang Zhan", "Chun Zhang", "Jinyang Zhang", "Xiyue Zhang", "Owen Xingjian Zhang", "Yue Zhang", "Yongchi Zhao", "Xiangyu Zheng", "ChenghuaZhong", "Yang Gao", "Zhoujun Li", "Dayiheng Liu", "Qian Liu", "Tianyu Liu", "Shiwen Ni", "Junran Peng", "Yujia Qin", "Wenbo Su", "Guoyin Wang", "Shi Wang", "Jian Yang", "Min Yang", "Meng Cao", "Xiang Yue", "Zhaoxiang Zhang", "Wangchunshu Zhou", "Jiaheng Liu", "Qunshu Lin", "Wenhao Huang", "Ge Zhang" ]
Large language models (LLMs) have demonstrated remarkable proficiency in mainstream academic disciplines such as mathematics, physics, and computer science. However, human knowledge encompasses over 200 specialized disciplines, far exceeding the scope of existing benchmarks. The capabilities of LLMs in many of these specialized fields, particularly in light industry, agriculture, and service-oriented disciplines, remain inadequately evaluated. To address this gap, we present SuperGPQA, a comprehensive benchmark that evaluates graduate-level knowledge and reasoning capabilities across 285 disciplines. Our benchmark employs a novel Human-LLM collaborative filtering mechanism to eliminate trivial or ambiguous questions through iterative refinement based on both LLM responses and expert feedback. Our experimental results reveal significant room for improvement in the performance of current state-of-the-art LLMs across diverse knowledge domains (e.g., the reasoning-focused model Gemini-2.5-Pro achieved the highest accuracy of 63.56% on SuperGPQA), highlighting the considerable gap between current model capabilities and artificial general intelligence. Additionally, we present comprehensive insights from our management of a large-scale annotation process, involving over 80 expert annotators and an interactive Human-LLM collaborative system, offering valuable methodological guidance for future research initiatives of comparable scope.
https://openreview.net/forum?id=6WgflzYQpf
Datasets and Benchmarks
Poster
6WgflzYQpf
MM-OPERA: Benchmarking Open-ended Association Reasoning for Large Vision-Language Models
[ "Zimeng Huang", "Jinxin Ke", "Xiaoxuan Fan", "Yufeng Yang", "Yang Liu", "Liu Zhonghan", "Zedi Wang", "Junteng Dai", "Haoyi Jiang", "Yuyu Zhou", "Keze Wang", "Ziliang Chen" ]
Large Vision-Language Models (LVLMs) have exhibited remarkable progress. However, deficiencies remain compared to human intelligence, such as hallucination and shallow pattern matching. In this work, we aim to evaluate a fundamental yet underexplored intelligence: association, a cornerstone of human cognition for creative thinking and knowledge integration. Current benchmarks, often limited to closed-ended tasks, fail to capture the complexity of open-ended association reasoning vital for real-world applications. To address this, we present MM-OPERA, a systematic benchmark with 11,497 instances across two open-ended tasks: Remote-Item Association (RIA) and In-Context Association (ICA), aligning association intelligence evaluation with human psychometric principles. It challenges LVLMs to resemble the spirit of divergent thinking and convergent associative reasoning through free-form responses and explicit reasoning paths. We deploy tailored LLM-as-a-Judge strategies to evaluate open-ended outputs, applying process-reward-informed judgment to dissect reasoning with precision. Extensive empirical studies on state-of-the-art LVLMs, including sensitivity analysis of task instances, validity analysis of LLM-as-a-Judge strategies, and diversity analysis across abilities, domains, languages, cultures, etc., provide a comprehensive and nuanced understanding of the limitations of current LVLMs in associative reasoning, paving the way for more human-like and general-purpose AI. The dataset and code are available at https://github.com/MM-OPERA-Bench/MM-OPERA.
https://openreview.net/forum?id=6BpKATZQd8
Datasets and Benchmarks
Poster
6BpKATZQd8
Factorio Learning Environment
[ "Jack Hopkins", "Mart Bakler", "Akbir Khan" ]
Large Language Models (LLMs) are rapidly saturating existing benchmarks, necessitating new open-ended evaluations. We introduce the Factorio Learning Environment (FLE), based on the game of Factorio, which tests agents in long-term planning, spatial reasoning, program synthesis, and resource optimization. FLE provides exponentially scaling challenges -- from basic automation to complex factories processing millions of resource units per second. We provide two settings: (1) open-play, with the open-ended task of building the largest factory on a procedurally generated map, and (2) lab-play, consisting of 33 bounded tasks across three settings with fixed resources. We demonstrate across both settings that models still lack strong spatial reasoning. In lab-play, we find that LLMs exhibit promising short-horizon skills, yet are unable to operate effectively in constrained environments, reflecting limitations in error analysis. In open-play, while LLMs discover automation strategies that improve growth (e.g., electric-powered drilling), they fail to achieve complex automation (e.g., electronic-circuit manufacturing).
https://openreview.net/forum?id=652Q6jBFMZ
Datasets and Benchmarks
Poster
652Q6jBFMZ
SWE-smith: Scaling Data for Software Engineering Agents
[ "John Yang", "Kilian Lieret", "Carlos E Jimenez", "Alexander Wettig", "Kabir Khandpur", "Yanzhe Zhang", "Binyuan Hui", "Ofir Press", "Ludwig Schmidt", "Diyi Yang" ]
Despite recent progress in Language Models (LMs) for software engineering, collecting training data remains a significant pain point. Existing datasets are small, with at most 1,000s of training instances from 11 or fewer GitHub repositories. The procedures to curate such datasets are often complex, necessitating hundreds of hours of human labor; companion execution environments also take up several terabytes of storage, severely limiting their scalability and usability. To address this pain point, we introduce SWE-smith, a novel pipeline for generating software engineering training data at scale. Given any Python codebase, SWE-smith constructs a corresponding execution environment, then automatically synthesizes 100s to 1,000s of task instances that break existing test(s) in the codebase. Using SWE-smith, we create a dataset of 50k instances sourced from 128 GitHub repositories, an order of magnitude larger than all previous works. We train SWE-agent-LM-32B, achieving 40.2% Pass@1 resolve rate on the SWE-bench Verified benchmark, state of the art among open source models. We open source SWE-smith (collection procedure, task instances, trajectories, models) to lower the barrier of entry for research in LM systems for automated software engineering. All assets available at \url{https://swesmith.com}.
https://openreview.net/forum?id=63iVrXc8cC
Datasets and Benchmarks
Spotlight
63iVrXc8cC
Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring
[ "Yuyan Chen", "Nico Lang", "B. Christian Schmidt", "Aditya Jain", "Yves Basset", "Sara Beery", "Maxim Larrivée", "David Rolnick" ]
Global biodiversity is declining at an unprecedented rate, yet little is known about most species and how their populations are changing. Indeed, some 90% of Earth’s species are estimated to be completely unknown. Machine learning has recently emerged as a promising tool to facilitate long-term, large-scale biodiversity monitoring, including algorithms for fine-grained classification of species from images. However, such algorithms typically are not designed to detect examples from categories unseen during training – the problem of open-set recognition (OSR) – limiting their applicability for highly diverse, poorly studied taxa such as insects. To address this gap, we introduce Open-Insect, a large-scale, fine-grained dataset to evaluate unknown species detection across different geographic regions with varying difficulty. We benchmark 38 OSR algorithms across three categories: post-hoc, training-time regularization, and training with auxiliary data, finding that simple post-hoc approaches remain a strong baseline. We also demonstrate how to leverage auxiliary data to improve species discovery in regions with limited data. Our results provide timely insights to guide the development of computer vision methods for biodiversity monitoring and species discovery.
https://openreview.net/forum?id=63Tia99ofI
Datasets and Benchmarks
Spotlight
63Tia99ofI
PHANTOM: A Benchmark for Hallucination Detection in Financial Long-Context QA
[ "Lanlan Ji", "Dominic Seyler", "Gunkirat Kaur", "Manjunath Hegde", "Koustuv Dasgupta", "Bing Xiang" ]
While Large Language Models (LLMs) show great promise, their tendencies to hallucinate pose significant risks in high-stakes domains like finance, especially when used for regulatory reporting and decision-making. Existing hallucination detection benchmarks fail to capture the complexities of financial applications, which require high numerical precision, a nuanced understanding of the language of finance, and the ability to handle long-context documents. To address this, we introduce PHANTOM, a novel benchmark dataset for evaluating hallucination detection in long-context financial QA. Our approach first generates a seed dataset of high-quality "query-answer-document (chunk)" triplets, with either hallucinated or correct answers, that are validated by human annotators and subsequently expanded to capture various context lengths and information placements. We demonstrate how PHANTOM allows fair comparison of hallucination detection models and provides insights into LLM performance, offering a valuable resource for improving hallucination detection in financial applications. Further, our benchmarking results highlight the severe challenges out-of-the-box models face in detecting real-world hallucinations on long-context data, and establish some promising directions towards alleviating these challenges by fine-tuning open-source LLMs using PHANTOM.
https://openreview.net/forum?id=5YQAo0S3Hm
Datasets and Benchmarks
Poster
5YQAo0S3Hm
MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering
[ "Rushi Qiang", "Yuchen Zhuang", "Yinghao Li", "Dingu Sagar V K", "Rongzhi Zhang", "ChangHao Li", "Ian Shu-Hei Wong", "Sherry Yang", "Percy Liang", "Chao Zhang", "Bo Dai" ]
We introduce MLE-Dojo, a Gym-style framework for systematic reinforcement learning, evaluation, and improvement of autonomous large language model (LLM) agents in iterative machine learning engineering (MLE) workflows. Unlike existing benchmarks that primarily rely on static datasets or single-attempt evaluations, MLE-Dojo provides an interactive environment enabling agents to iteratively experiment, debug, and refine solutions through structured feedback loops. Built upon 200+ real-world Kaggle challenges, MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios such as data processing, architecture search, hyperparameter tuning, and code debugging. Its fully executable environment supports comprehensive agent training via both supervised fine-tuning and reinforcement learning, facilitating iterative experimentation, realistic data sampling, and real-time outcome verification. Extensive evaluations of eight frontier LLMs reveal that while current models achieve meaningful iterative improvements, they still exhibit significant limitations in autonomously generating long-horizon solutions and efficiently resolving complex errors. Furthermore, MLE-Dojo’s flexible and extensible architecture seamlessly integrates diverse data sources, tools, and evaluation protocols, uniquely enabling model-based agent tuning and promoting interoperability, scalability, and reproducibility. We open-source our framework and benchmarks to foster community-driven innovation towards next-generation MLE agents.
https://openreview.net/forum?id=5W5mFU4oMO
Datasets and Benchmarks
Poster
5W5mFU4oMO
LawShift: Benchmarking Legal Judgment Prediction Under Statute Shifts
[ "Zhuo Han", "Yi Yang", "Yi Feng", "Wanhong Huang", "Xuxing Ding", "Chuanyi Li", "Jidong Ge", "Vincent Ng" ]
Legal Judgment Prediction (LJP) seeks to predict case outcomes given available case information, offering practical value for both legal professionals and laypersons. However, a key limitation of existing LJP models is their limited adaptability to statutory revisions. Current SOTA models are neither designed nor evaluated for statutory revisions. To bridge this gap, we introduce LawShift, a benchmark dataset for evaluating LJP under statutory revisions. Covering 31 fine-grained change types, LawShift enables systematic assessment of SOTA models' ability to handle legal changes. We evaluate five representative SOTA models on LawShift, uncovering significant limitations in their response to legal updates. Our findings show that model architecture plays a critical role in adaptability, offering actionable insights and guiding future research on LJP in dynamic legal contexts.
https://openreview.net/forum?id=5SpFenlxDF
Datasets and Benchmarks
Poster
5SpFenlxDF
CARES: Comprehensive Evaluation of Safety and Adversarial Robustness in Medical LLMs
[ "Sijia Chen", "Xiaomin Li", "Mengxue Zhang", "Eric Hanchen Jiang", "Qingcheng Zeng", "Chen-Hsiang Yu" ]
Large language models (LLMs) are increasingly deployed in medical contexts, raising critical concerns about safety, alignment, and susceptibility to adversarial manipulation. While prior benchmarks assess model refusal capabilities for harmful prompts, they often lack clinical specificity, graded harmfulness levels, and coverage of jailbreak-style attacks. We introduce CARES (Clinical Adversarial Robustness and Evaluation of Safety), a benchmark for evaluating LLM safety in healthcare. CARES includes over 18,000 prompts spanning eight medical safety principles, four harm levels, and four prompting styles—direct, indirect, obfuscated, and role-play—to simulate both malicious and benign use cases. We propose a three-way response evaluation protocol (Accept, Caution, Refuse) and a fine-grained Safety Score metric to assess model behavior. Our analysis reveals that many state-of-the-art LLMs remain vulnerable to jailbreaks that subtly rephrase harmful prompts, while also over-refusing safe but atypically phrased queries. Finally, we propose a mitigation strategy using a lightweight classifier to detect jailbreak attempts and steer models toward safer behavior via reminder-based conditioning. CARES provides a rigorous framework for testing and improving medical LLM safety under adversarial and ambiguous conditions.
https://openreview.net/forum?id=5RykuxC8Jl
Datasets and Benchmarks
Poster
5RykuxC8Jl
HouseLayout3D: A Benchmark and Training-free Baseline for 3D Layout Estimation in the Wild
[ "Valentin Bieri", "Marie-Julie Rakotosaona", "Keisuke Tateno", "Francis Engelmann", "Leonidas Guibas" ]
Current 3D layout estimation models are predominantly trained on synthetic datasets biased toward simplistic, single-floor scenes. This prevents them from generalizing to complex, multi-floor buildings, often forcing a per-floor processing approach that sacrifices global context. Few works have attempted to holistically address multi-floor layouts. In this work, we introduce HouseLayout3D, a real-world benchmark dataset, which highlights the limitations of existing research when handling expansive, architecturally complex spaces. Additionally, we propose MultiFloor3D, a baseline method leveraging recent advances in 3D reconstruction and 2D segmentation. Our approach significantly outperforms state-of-the-art methods on both our new and existing datasets. Remarkably, it does not require any layout-specific training.
https://openreview.net/forum?id=5M5WdH659Y
Datasets and Benchmarks
Poster
5M5WdH659Y
PSMBench: A Benchmark and Dataset for Evaluating LLMs Extraction of Protocol State Machines from RFC Specifications
[ "Zilin Shen", "Xinyu Luo", "Imtiaz Karim", "Elisa Bertino" ]
Accurately extracting protocol-state machines (PSMs) from the long, densely written Request-for-Comments (RFC) standards that govern Internet-scale communication remains a bottleneck for automated security analysis and protocol testing. In this paper, we introduce RFC2PSM, the first large-scale dataset that pairs 1,580 pages of cleaned RFC text with 108 manually validated states and 297 transitions covering 14 widely deployed protocols spanning the data-link, transport, session, and application layers. Built on this corpus, we propose PsmBench, a benchmark that (i) feeds chunked RFC text to an LLM, (ii) prompts the model to emit a machine-readable PSM, and (iii) scores the output with structure-aware, semantic fuzzy-matching metrics that reward partially correct graphs. A comprehensive baseline study of nine state-of-the-art open and commercial LLMs reveals a persistent state–transition gap: models identify many individual states (up to $0.82$ F1) but struggle to assemble coherent transition graphs ($\leq 0.38$ F1), highlighting challenges in long-context reasoning, alias resolution, and action/event disambiguation. We release the dataset, evaluation code, and all model outputs as open source, providing a fully reproducible starting point for future work on reasoning over technical prose and generating executable graph structures. RFC2PSM and PsmBench aim to catalyze cross-disciplinary progress toward LLMs that can interpret and verify the protocols that keep the Internet safe.
https://openreview.net/forum?id=5HGBErIHuV
Datasets and Benchmarks
Poster
5HGBErIHuV
Struct-Bench: A Benchmark for Differentially Private Structured Text Generation
[ "Shuaiqi Wang", "Vikas Raunak", "Arturs Backurs", "Victor Reis", "Pei Zhou", "Sihao Chen", "Longqi Yang", "Zinan Lin", "Sergey Yekhanin", "Giulia Fanti" ]
Differentially private (DP) synthetic data generation is a promising technique for utilizing private datasets that otherwise cannot be exposed for model training or other analytics. While much research literature has focused on generating private unstructured text and image data, in enterprise settings, structured data (e.g., tabular) is more common, often including natural language fields or components. Existing synthetic data evaluation techniques (e.g., FID) struggle to capture the structural properties and correlations of such datasets. In this work, we propose Struct-Bench, a framework and benchmark for evaluating synthetic datasets derived from structured datasets that contain natural language data. The Struct-Bench framework requires users to provide a representation of their dataset structure as a Context-Free Grammar (CFG). Our benchmark comprises 5 real-world and 2 synthetically generated datasets. We show that these datasets pose a substantial challenge even for state-of-the-art DP synthetic data generation methods. Struct-Bench provides reference implementations of different metrics and a leaderboard, offering a standardized platform to benchmark and investigate privacy-preserving synthetic data methods. We also present a case study showing how Struct-Bench improves the synthetic data quality of Private Evolution (PE) on structured data. The benchmark and leaderboard are publicly available at https://struct-bench.github.io.
https://openreview.net/forum?id=59vXWteYuh
Datasets and Benchmarks
Poster
59vXWteYuh
FreshStack: Building Realistic Benchmarks for Evaluating Retrieval on Technical Documents
[ "Nandan Thakur", "Jimmy Lin", "Sam Havens", "Michael Carbin", "Omar Khattab", "Andrew Drozdov" ]
We introduce FreshStack, a holistic framework for automatically building information retrieval (IR) evaluation benchmarks by incorporating challenging questions and answers. FreshStack conducts the following steps: (1) automatic corpus collection from code and technical documentation, (2) nugget generation from community-asked questions and answers, and (3) nugget-level support, retrieving documents using a fusion of retrieval techniques and hybrid architectures. We use FreshStack to build five datasets on fast-growing, recent, and niche domains to ensure the tasks are sufficiently challenging. On FreshStack, existing retrieval models, when applied out-of-the-box, significantly underperform oracle approaches on all five domains, indicating substantial headroom to improve IR quality. In addition, we identify cases where rerankers do not improve first-stage retrieval accuracy (two out of five domains) and where oracle context helps an LLM generator produce a high-quality RAG answer. We hope FreshStack will facilitate future work toward constructing realistic, scalable, and uncontaminated IR and RAG evaluation benchmarks.
https://openreview.net/forum?id=54TTgXlS2U
Datasets and Benchmarks
Poster
54TTgXlS2U
ConnectomeBench: Can LLMs proofread the connectome?
[ "Jeff Brown", "Andrew Kirjner", "Annika Vivekananthan", "Edward Boyden" ]
Connectomics—the mapping of neural connections in an organism's brain—currently requires extraordinary human effort to proofread the data collected from imaging and machine-learning-assisted segmentation. With the growing excitement around using AI agents to automate important scientific tasks, we explore whether current AI systems can perform multiple tasks necessary for data proofreading. We introduce ConnectomeBench, a multimodal benchmark evaluating large language model (LLM) capabilities in three critical proofreading tasks: segment type identification, split error correction, and merge error detection. Using expert-annotated data from two large open-source datasets—a cubic millimeter of mouse visual cortex and the complete Drosophila brain—we evaluate proprietary multimodal LLMs including Claude 3.7/4 Sonnet, o4-mini, GPT-4.1, GPT-4o, as well as open-source models like InternVL-3 and NVLM. Our results demonstrate that current models achieve surprisingly high performance in segment identification (52-82\% balanced accuracy vs. 20-25\% chance) and binary/multiple choice split error correction (75-85\% accuracy vs. 50\% chance) while generally struggling on merge error identification tasks. Overall, while the best models still lag behind expert performance, they demonstrate promising capabilities that could eventually enable them to augment and potentially replace human proofreading in connectomics.
https://openreview.net/forum?id=50gEiKEuUZ
Datasets and Benchmarks
Spotlight
50gEiKEuUZ
Sheetpedia: A 300K-Spreadsheet Corpus for Spreadsheet Intelligence and LLM Fine-Tuning
[ "Zailong Tian", "Zhuoheng Han", "Houfeng Wang", "Lizi Liao" ]
Spreadsheets are widely used for data analysis and reporting, yet their complex structure and formula logic pose significant challenges for AI systems. We introduce Sheetpedia, a large-scale corpus of over 290,000 diverse spreadsheets (from 324,000+ workbooks) compiled from enterprise email archives and online forums. We detail a rigorous collection and preprocessing pipeline (integrating the Enron email spreadsheet archive and the Fuse web corpus, plus a new crawl of Excel forums) to standardize formats, filter languages, and remove duplicates. Sheetpedia provides extensive coverage of real formulas and annotations – addressing a gap left by prior table datasets (e.g. web tables used in TURL or Text-to-SQL in Spider) which often lack formula semantics. We present comprehensive corpus statistics, highlighting rich formula diversity and a majority (78\%+) of English content. To demonstrate the corpus’s utility, we fine-tune large language models on Sheetpedia for two novel spreadsheet understanding tasks: Natural Language to Semantic Range (NL2SR) and Natural Language to Formula (NL2Formula). Using a rejection-sampling data generation strategy, our fine-tuned models achieve up to 97.5\% accuracy on NL2SR and 71.7\% on NL2Formula – substantially outperforming baseline approaches. Sheetpedia (to be released publicly) fills a crucial need for a large, high-quality spreadsheet benchmark, enabling more effective spreadsheet intelligence and natural language interfaces for spreadsheet tools.
https://openreview.net/forum?id=4vLYwlA3X5
Datasets and Benchmarks
Spotlight
4vLYwlA3X5
OVERT: A Benchmark for Over-Refusal Evaluation on Text-to-Image Models
[ "Ziheng Cheng", "Yixiao Huang", "Hui Xu", "Somayeh Sojoudi", "Xuandong Zhao", "Dawn Song", "Song Mei" ]
Text-to-Image (T2I) models have achieved remarkable success in generating visual content from text inputs. Although multiple safety alignment strategies have been proposed to prevent harmful outputs, they often lead to overly cautious behavior---rejecting even benign prompts---a phenomenon known as \textit{over-refusal} that reduces the practical utility of T2I models. Despite over-refusal having been observed in practice, there is no large-scale benchmark that systematically evaluates this phenomenon for T2I models. In this paper, we present an automatic workflow to construct synthetic evaluation data, resulting in OVERT (\textbf{OVE}r-\textbf{R}efusal evaluation on \textbf{T}ext-to-image models), the first large-scale benchmark for assessing over-refusal behaviors in T2I models. OVERT includes 4,600 seemingly harmful but benign prompts across nine safety-related categories, along with 1,785 genuinely harmful prompts (OVERT-unsafe) to evaluate the safety–utility trade-off. Using OVERT, we evaluate several leading T2I models and find that over-refusal is a widespread issue across various categories (Figure 1), underscoring the need for further research to enhance the safety alignment of T2I models without compromising their functionality. As a preliminary attempt to reduce over-refusal, we explore prompt rewriting; however, we find it often compromises faithfulness to the meaning of the original prompts. Finally, we demonstrate the flexibility of our generation framework in accommodating diverse safety requirements by generating customized evaluation data adapting to user-defined policies.
https://openreview.net/forum?id=4ueprXZqZP
Datasets and Benchmarks
Poster
4ueprXZqZP
TalkCuts: A Large-Scale Dataset for Multi-Shot Human Speech Video Generation
[ "Jiaben Chen", "Zixin Wang", "Ailing Zeng", "Yang Fu", "Xueyang Yu", "Siyuan Cen", "Julian Tanke", "Yihang Chen", "Koichi Saito", "Yuki Mitsufuji", "Chuang Gan" ]
In this work, we present TalkCuts, a large-scale dataset designed to facilitate the study of multi-shot human speech video generation. Unlike existing datasets that focus on single-shot, static viewpoints, TalkCuts offers 164k clips totaling over 500 hours of high-quality 1080P human speech videos with diverse camera shots, including close-up, half-body, and full-body views. The dataset includes detailed textual descriptions, 2D keypoints and 3D SMPL-X motion annotations, covering over 10k identities, enabling multimodal learning and evaluation. As a first attempt to showcase the value of the dataset, we present Orator, an LLM-guided multi-modal generation framework as a simple baseline, where the language model functions as a multi-faceted director, orchestrating detailed specifications for camera transitions, speaker gesticulations, and vocal modulation. This architecture enables the synthesis of coherent long-form videos through our integrated multi-modal video generation module. Extensive experiments in both pose-guided and audio-driven settings show that training on TalkCuts significantly enhances the cinematographic coherence and visual appeal of generated multi-shot speech videos. We believe TalkCuts provides a strong foundation for future work in controllable, multi-shot speech video generation and broader multimodal learning.
https://openreview.net/forum?id=4a0w7AkrY7
Datasets and Benchmarks
Poster
4a0w7AkrY7
DAVE: Diagnostic benchmark for Audio Visual Evaluation
[ "Gorjan Radevski", "Teodora Popordanoska", "Matthew B. Blaschko", "Tinne Tuytelaars" ]
Audio-visual understanding is a rapidly evolving field that seeks to integrate and interpret information from both auditory and visual modalities. Despite recent advances in multi-modal learning, existing benchmarks often suffer from strong visual bias -- when answers can be inferred from visual data alone -- and provide only aggregate scores that conflate multiple sources of error. This makes it difficult to determine whether models struggle with visual understanding, audio interpretation, or audio-visual alignment. In this work, we introduce DAVE: Diagnostic Audio Visual Evaluation, a novel benchmark dataset designed to systematically evaluate audio-visual models across controlled settings. DAVE alleviates existing limitations by (i) ensuring both modalities are necessary to answer correctly and (ii) decoupling evaluation into atomic subcategories. Our detailed analysis of state-of-the-art models reveals specific failure modes and provides targeted insights for improvement. By offering this standardized diagnostic framework, we aim to facilitate more robust development of audio-visual models. Dataset: https://huggingface.co/datasets/gorjanradevski/dave Code: https://github.com/gorjanradevski/dave
https://openreview.net/forum?id=4ZAX1NT0ms
Datasets and Benchmarks
Poster
4ZAX1NT0ms
MARS-VFL: A Unified Benchmark for Vertical Federated Learning with Realistic Evaluation
[ "Wei Shen", "Weiqi Liu", "Mingde Chen", "Wenke Huang", "Mang Ye" ]
Vertical Federated Learning (VFL) has emerged as a critical privacy-preserving learning paradigm, enabling collaborative model training by leveraging distributed features across clients. However, due to privacy concerns, there are few publicly available real-world datasets for evaluating VFL methods, which poses significant challenges to related research. To bridge this gap, we propose MARS-VFL, a unified benchmark for realistic VFL evaluation. It integrates data from practical applications involving collaboration across different features, maintaining compatibility with the VFL setting. Based on this, we standardize the evaluation of VFL methods from the mainstream aspects of efficiency, robustness, and security. We conduct comprehensive experiments to assess different VFL approaches, providing references for unified evaluation. Furthermore, we are the first to unify the evaluation of robustness challenges in VFL and introduce a new method for addressing them, establishing standard baselines for future research.
https://openreview.net/forum?id=4Ud0pRqFto
Datasets and Benchmarks
Spotlight
4Ud0pRqFto
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback
[ "Boyuan Chen", "Donghai Hong", "Jiaming Ji", "Jiacheng Zheng", "Bowen Dong", "Jiayi Zhou", "Kaile Wang", "Josef Dai", "Xuyao Wang", "Wenqi Chen", "Qirui Zheng", "Wenxin Li", "Sirui Han", "Yike Guo", "Yaodong Yang" ]
As multimodal large models (MLLMs) continue to advance across challenging tasks, a key question emerges: \textbf{\textit{What essential capabilities are still missing?}} A critical aspect of human learning is continuous interaction with the environment -- not limited to language, but also involving multimodal understanding and generation. To move closer to human-level intelligence, models must similarly support \textbf{multi-turn}, \textbf{multimodal interaction}. In particular, they should comprehend interleaved multimodal contexts and respond coherently in ongoing exchanges. In this work, we present \textbf{an initial exploration} through \textsc{InterMT} -- \textbf{the first preference dataset for \textit{multi-turn} multimodal interaction}, grounded in real human feedback. In this exploration, we particularly emphasize the importance of human oversight, introducing expert annotations to guide the process, motivated by the fact that current MLLMs lack such complex interactive capabilities. \textsc{InterMT} captures human preferences at both global and local levels across nine sub-dimensions, and consists of 15.6k prompts, 52.6k multi-turn dialogue instances, and 32.4k human-labeled preference pairs. To compensate for current MLLMs' limited capabilities in multimodal understanding and generation, we introduce an agentic workflow that leverages tool-augmented MLLMs to construct multi-turn QA instances. To further this goal, we introduce \textsc{InterMT-Bench} to assess the ability of MLLMs in assisting judges with multi-turn, multimodal tasks. We demonstrate the utility of \textsc{InterMT} through applications such as judge moderation and further reveal the \textit{multi-turn scaling law} of judge models. We hope the open-sourcing of our data can help facilitate further research on aligning current MLLMs to the next step.
https://openreview.net/forum?id=4SUtAp2cm0
Datasets and Benchmarks
Spotlight
4SUtAp2cm0
The Leaderboard Illusion
[ "Shivalika Singh", "Yiyang Nan", "Alex Wang", "Daniel D'souza", "Sayash Kapoor", "Ahmet Üstün", "Sanmi Koyejo", "Yuntian Deng", "Shayne Longpre", "Noah A. Smith", "Beyza Ermis", "Marzieh Fadaee", "Sara Hooker" ]
Measuring progress is fundamental to the advancement of any scientific field. As benchmarks play an increasingly central role, they also grow more susceptible to distortion. Chatbot Arena has emerged as the go-to leaderboard for ranking the most capable AI systems. Yet, in this work we identify systematic issues that have resulted in a distorted playing field. We find that undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired. We establish that the ability of these providers to choose the best score leads to biased Arena scores due to selective disclosure of performance results. At an extreme, we found one provider testing 27 private variants before making one model public at the second position on the leaderboard. We also establish that proprietary closed models are sampled at higher rates (number of battles) and have fewer models removed from the arena than open-weight and open-source alternatives. Both these policies lead to large data access asymmetries over time. The top two providers have individually received an estimated 19.2% and 20.4% of all data on the arena. In contrast, a combined 83 open-weight models have only received an estimated 29.7% of the total data. With conservative estimates, we show that access to Chatbot Arena data yields substantial benefits; even limited additional data can result in relative performance gains of up to 112% on ArenaHard, a test set from the arena distribution. Together, these dynamics result in overfitting to Arena-specific dynamics rather than general model quality. The Arena builds on the substantial efforts of both the organizers and an open community that maintains this valuable evaluation platform. We offer actionable recommendations to reform the Chatbot Arena's evaluation framework and promote fairer, more transparent benchmarking for the field.
https://openreview.net/forum?id=4Ae8edNqm0
Datasets and Benchmarks
Poster
4Ae8edNqm0
Satellites Reveal Mobility: A Commuting Origin-destination Flow Generator for Global Cities
[ "Can Rong", "Xin Zhang", "Yanxin Xi", "HONGJIE SUI", "Jingtao Ding", "Yong Li" ]
Commuting Origin-destination (OD) flows, capturing daily population mobility of citizens, are vital for sustainable development across cities around the world. However, such data are challenging to obtain due to the high cost of travel surveys and privacy concerns. Surprisingly, we find that satellite imagery, publicly available across the globe, contains rich urban semantic signals to support high-quality OD flow generation, capturing over 98\% of the expressiveness of traditional, hard-to-collect multi-source urban sociodemographic, economic, land use, and point-of-interest data. This inspires us to design a novel data generator, GlODGen (Global-scale Origin-Destination Flow Generator), which can generate OD flow data for any city of interest around the world. Specifically, GlODGen first leverages Vision-Language Geo-Foundation Models to extract urban semantic signals related to human mobility from satellite imagery. These features are then combined with population data to form region-level representations, which are used to generate OD flows via graph diffusion models. Extensive experiments on 4 continents and 6 representative cities show that GlODGen has great generalizability across diverse urban environments on different continents and can generate OD flow data for global cities highly consistent with real-world mobility data. We implement GlODGen as an automated tool, seamlessly integrating data acquisition and curation, urban semantic feature extraction, and OD flow generation. It has been released at https://github.com/tsinghua-fib-lab/generate-od-pubtools.
https://openreview.net/forum?id=49W4eKKjPU
Datasets and Benchmarks
Poster
49W4eKKjPU
MetaBox-v2: A Unified Benchmark Platform for Meta-Black-Box Optimization
[ "Zeyuan Ma", "Yue-Jiao Gong", "Hongshu Guo", "Wenjie Qiu", "Sijie Ma", "Hongqiao Lian", "Jiajun Zhan", "Kaixu Chen", "Chen Wang", "Zhiyang Huang", "Zechuan Huang", "Guojun Peng", "Ran Cheng", "Yining Ma" ]
Meta-Black-Box Optimization (MetaBBO) streamlines the automation of optimization algorithm design through meta-learning. It typically employs a bi-level structure: the meta-level policy undergoes meta-training to reduce the manual effort required in developing algorithms for low-level optimization tasks. The original MetaBox (2023) provided the first open-source framework for reinforcement learning-based single-objective MetaBBO. However, its relatively narrow scope no longer keeps pace with the swift advancement of this field. In this paper, we introduce MetaBox-v2 (\url{https://github.com/MetaEvo/MetaBox}) as a milestone upgrade with four novel features: 1) a unified architecture supporting RL, evolutionary, and gradient-based approaches, by which we reproduce $23$ up-to-date baselines; 2) efficient parallelization schemes, which reduce the training/testing time by $10-40$x; 3) a comprehensive benchmark suite of $18$ synthetic/realistic tasks ($1900$+ instances) spanning single-objective, multi-objective, multi-model, and multi-task optimization scenarios; 4) plentiful and extensible interfaces for custom analysis/visualization and integration with external optimization tools/benchmarks. To show the utility of MetaBox-v2, we carry out a systematic case study that evaluates the built-in baselines in terms of optimization performance, generalization ability, and learning efficiency. Valuable insights are drawn from this thorough and detailed analysis for practitioners and those new to the field.
https://openreview.net/forum?id=415T06x0vG
Datasets and Benchmarks
Poster
415T06x0vG
MLLM-ISU: The First-Ever Comprehensive Benchmark for Multimodal Large Language Models based Intrusion Scene Understanding
[ "Fujun Han", "Peng Ye" ]
Vision-based intrusion detection has multiple applications in practical scenarios, e.g., autonomous driving, intelligent monitoring, and security. Previous works mainly focus on improving intrusion detection performance, without a comprehensive and in-depth understanding of the intrusion scene. To fill this gap, we explore a novel task called Multimodal Large Language Models based Intrusion Scene Understanding (MLLM-ISU) and report a comprehensive benchmark for the task. Specifically, we first design an effective and automatic visual question-answer generation strategy, constructing a new MLLM-ISU dataset with 3,000 VQA evaluation pairs, 8,925 training pairs, and six relevant subtasks. Then, we perform a comprehensive assessment on various state-of-the-art proprietary and open-source MLLMs, e.g., DeepSeek-VL2, GPT-4o, and Qwen2.5-VL, and find that current MLLMs exhibit weak capabilities on this task. Further, in order to improve the intrusion understanding capabilities of current MLLMs, we propose a Post-Training Framework with three sequential training stages, i.e., Intrusion-aware Visual Instruction Pre-training, Intrusion Chain of Thought tuning, and Intrusion-centric VQA tuning, and extensive experiments and comparisons verify the effectiveness of the proposed three-stage training framework. Datasets and code are available at https://github.com/1012537710/MLLM-ISU.
https://openreview.net/forum?id=3wJh9Pw2sn
Datasets and Benchmarks
Poster
3wJh9Pw2sn
MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives
[ "Wisdom Oluchi Ikezogwo", "Kevin Minghan Zhang", "Mehmet Saygin Seyfioglu" ]
Multi-modal models are data-hungry. While datasets with natural images are abundant, medical image datasets cannot afford the same luxury. To enable representation learning for medical images at scale, we turn to YouTube, a platform with a large reservoir of open-source medical pedagogical videos. We curate MedicalNarratives, a dataset of 4.7M medical image-text pairs, with 1M samples containing dense annotations in the form of spatial traces (and bounding boxes), and 118K videos centered on the trace event (with aligned text), enabling spatiotemporal grounding beyond single frames. Similar to think-aloud studies where instructors speak while moving their mouse cursor over relevant image regions, 1M images in MedicalNarratives contain localized mouse traces in image pixels, creating a spatial association between the text and pixels. To evaluate the utility of MedicalNarratives, we train GenMedClip with a CLIP-like objective using our dataset spanning 12 medical domains. GenMedClip outperforms previous state-of-the-art models on all 12 domains on a newly constructed medical imaging benchmark. Data, demo, code, and models will be made available.
https://openreview.net/forum?id=3rY182JOOZ
Datasets and Benchmarks
Poster
3rY182JOOZ
ResearchCodeBench: Benchmarking LLMs on Implementing Novel Machine Learning Research Code
[ "Tianyu Hua", "Harper Hua", "Violet Xiang", "Benjamin Klieger", "Sang T. Truong", "Weixin Liang", "Fan-Yun Sun", "Nick Haber" ]
Large language models (LLMs) have shown promise in transforming machine learning research, yet their capability to faithfully implement genuinely novel ideas from recent research papers—ideas unseen during pretraining—remains unclear. We introduce ResearchCodeBench, a benchmark that evaluates LLMs’ ability to translate cutting-edge ML contributions from top 2024-2025 research papers into executable code. We assessed 30+ proprietary and open-source LLMs, finding that even the best models correctly implement less than 40% of the code. We present empirical findings on performance comparison, contamination, and error patterns. By providing a rigorous evaluation platform, ResearchCodeBench enables continuous understanding and advancement of LLM-driven innovation in research code generation.
https://openreview.net/forum?id=3k70Vt0YFS
Datasets and Benchmarks
Spotlight
3k70Vt0YFS
CleverBirds: A Multiple-Choice Benchmark for Fine-grained Human Knowledge Tracing
[ "Leonie Bossemeyer", "Samuel Heinrich", "Grant Van Horn", "Oisin Mac Aodha" ]
Mastering fine-grained visual recognition, essential in many expert domains, can require that specialists undergo years of dedicated training. Modeling the progression of such expertise in humans remains challenging, and accurately inferring a human learner’s knowledge state is a key step toward understanding visual learning. We introduce CleverBirds, a large-scale knowledge tracing benchmark for fine-grained bird species recognition. Collected by the citizen-science platform eBird, it offers insight into how individuals acquire expertise in complex fine-grained classification. More than 40,000 participants have engaged in the quiz, answering over 17 million multiple-choice questions spanning over 10,000 bird species, with long-range learning patterns across an average of 400 questions per participant. We release this dataset to support the development and evaluation of new methods for visual knowledge tracing. We show that tracking learners' knowledge is challenging, especially across participant subgroups and question types, with different forms of contextual information offering varying degrees of predictive benefit. CleverBirds is among the largest benchmarks of its kind, offering a substantially higher number of learnable concepts. With it, we hope to enable new avenues for studying the development of visual expertise over time and across individuals.
https://openreview.net/forum?id=3chgJBffVS
Datasets and Benchmarks
Poster
3chgJBffVS
VeriThoughts: Enabling Automated Verilog Code Generation using Reasoning and Formal Verification
[ "Patrick Yubeaton", "Andre Nakkab", "Weihua Xiao", "Luca Collini", "Ramesh Karri", "Chinmay Hegde", "Siddharth Garg" ]
This paper introduces VeriThoughts, a novel dataset designed for reasoning-based Verilog code generation. We establish a new benchmark framework grounded in formal verification methods to evaluate the quality and correctness of generated hardware descriptions. Additionally, we present a suite of specialized small-scale models optimized specifically for Verilog generation. Our work addresses the growing need for automated hardware design tools that can produce verifiably correct implementations from high-level specifications, potentially accelerating the hardware development process while maintaining rigorous correctness guarantees.
https://openreview.net/forum?id=3Z8fWHKqlu
Datasets and Benchmarks
Poster
3Z8fWHKqlu
MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans
[ "Shubhankar Borse", "Seokeon Choi", "Sunghyun Park", "Jeongho Kim", "Shreya Kadambi", "Risheek Garrepalli", "Sungrack Yun", "Munawar Hayat", "Fatih Porikli" ]
Generating images that contain multiple humans performing complex actions, while preserving their facial identities, is a significant challenge. A major factor contributing to this is the lack of a dedicated benchmark. To address this, we introduce MultiHuman-Testbench, a novel benchmark for rigorously evaluating generative models for multi-human generation. The benchmark comprises 1800 samples, including carefully curated text prompts, describing a range of simple to complex human actions. These prompts are matched with a total of 5,550 unique human face images, sampled uniformly to ensure diversity across age, ethnic background, and gender. Alongside captions, we provide human-selected pose conditioning images which accurately match the prompt. We propose a multi-faceted evaluation suite employing four key metrics to quantify face count, ID similarity, prompt alignment, and action detection. We conduct a thorough evaluation of a diverse set of models, including zero-shot approaches and training-based methods, with and without regional priors. We also propose novel techniques to incorporate image and region isolation using human segmentation and Hungarian matching, significantly improving ID similarity. Our proposed benchmark and key findings provide valuable insights and a standardized tool for advancing research in multi-human image generation.
https://openreview.net/forum?id=3Wb4WuTUVc
Datasets and Benchmarks
Poster
3Wb4WuTUVc
Bubbleformer: Forecasting Boiling with Transformers
[ "Sheikh Md Shakeel Hassan", "Xianwei Zou", "Akash Dhruv", "Aparna Chandramowlishwaran" ]
Modeling boiling---an inherently chaotic, multiphase process central to energy and thermal systems---remains a significant challenge for neural PDE surrogates. Existing models require future input (e.g., bubble positions) during inference because they fail to learn nucleation from past states, limiting their ability to autonomously forecast boiling dynamics. They also fail to model flow boiling velocity fields, where sharp interface–momentum coupling demands long-range and directional inductive biases. We introduce Bubbleformer, a transformer-based spatiotemporal model that forecasts stable and long-range boiling dynamics including nucleation, interface evolution, and heat transfer without dependence on simulation data during inference. Bubbleformer integrates factorized axial attention, frequency-aware scaling, and conditions on thermophysical parameters to generalize across fluids, geometries, and operating conditions. To evaluate physical fidelity in chaotic systems, we propose interpretable physics-based metrics that evaluate heat flux consistency, interface geometry, and mass conservation. We also release BubbleML 2.0, a high-fidelity dataset that spans diverse working fluids (cryogens, refrigerants, dielectrics), boiling configurations (pool and flow boiling), flow regimes (bubbly, slug, annular), and boundary conditions. Bubbleformer sets new benchmark results in both prediction and forecasting of two-phase boiling flows.
https://openreview.net/forum?id=3TN5My3Xw6
Datasets and Benchmarks
Spotlight
3TN5My3Xw6
Intend to Move: A Multimodal Dataset for Intention-Aware Human Motion Understanding
[ "Ryo Umagami", "Liu Yue", "Xuangeng Chu", "Ryuto Fukushima", "Tetsuya Narita", "Yusuke Mukuta", "Tomoyuki Takahata", "Jianfei Yang", "Tatsuya Harada" ]
Human motion is inherently intentional, yet most motion modeling paradigms focus on low-level kinematics, overlooking the semantic and causal factors that drive behavior. Existing datasets further limit progress: they capture short, decontextualized actions in static scenes, providing little grounding for embodied reasoning. To address these limitations, we introduce $\textit{Intend to Move (I2M)}$, a large-scale, multimodal dataset for intention-grounded motion modeling. I2M contains 10.1 hours of two-person 3D motion sequences recorded in dynamic realistic home environments, accompanied by multi-view RGB-D video, 3D scene geometry, and language annotations of each participant’s evolving intentions. Benchmark experiments reveal a fundamental gap in current motion models: they fail to translate high-level goals into physically and socially coherent motion. I2M thus serves not only as a dataset but as a benchmark for embodied intelligence, enabling research on models that can reason about, predict, and act upon the ``why'' behind human motion.
https://openreview.net/forum?id=3CVU3RRvPx
Datasets and Benchmarks
Poster
3CVU3RRvPx
BMMR: A Large-Scale Bilingual Multimodal Multi-Discipline Reasoning Dataset
[ "Zhiheng Xi", "Guanyu Li", "YuTao Fan", "Honglin Guo", "Yufang Liu", "Xiaoran Fan", "Jiaqi Liu", "dingjinchao", "Wangmeng Zuo", "Zhenfei Yin", "LEI BAI", "Tao Ji", "Tao Gui", "Qi Zhang", "Xuanjing Huang" ]
In this paper, we introduce BMMR, a large-scale bilingual, multimodal, multi-disciplinary reasoning dataset for the community to develop and evaluate large multimodal models (LMMs). BMMR comprises 100k university-level questions drawn from 300 UNESCO-defined subjects, spanning diverse formats—multiple-choice, fill-in-the-blank, and open-ended QA—and sourced from both print and digital media such as books, exams, and quizzes. All data are curated and filtered via a human-in-the-loop, automated, and scalable framework, and each instance is paired with a high-quality reasoning path. The dataset is organized into two parts: BMMR-Eval, which comprises 20k high-quality instances to comprehensively assess LMMs’ knowledge and reasoning across multiple disciplines in both Chinese and English; and BMMR-Train, which contains 80k instances to support further research and development, extending the current focus on mathematical reasoning to diverse disciplines and domains. In addition, we propose the process-based multi-discipline BMMR-Verifier for accurate and fine-grained evaluation of LMMs’ reasoning. Extensive experiments reveal that (i) even SOTA models leave substantial headroom on BMMR-Eval; (ii) reasoning models exhibit discipline bias and outperform LMMs only on specific subjects; (iii) open-source models still trail their proprietary counterparts; and (iv) fine-tuning on BMMR-Train narrows this gap. Additionally, we conduct reasoning-chain analyses using BMMR-Verifier and other in-depth studies, uncovering the challenges LMMs currently face in multidisciplinary reasoning. We will release the data and models, and we believe our work can offer valuable insights and contributions to the community.
https://openreview.net/forum?id=2XstFOMwp4
Datasets and Benchmarks
Poster
2XstFOMwp4
Whose View of Safety? A Deep DIVE Dataset for Pluralistic Alignment of Text-to-Image Models
[ "Charvi Rastogi", "Tian Huey Teh", "Pushkar Mishra", "Roma Patel", "Ding Wang", "Mark Diaz", "Alicia Parrish", "Aida Mostafazadeh Davani", "Zoe Ashwood", "Michela Paganini", "Vinodkumar Prabhakaran", "Verena Rieser", "Lora Aroyo" ]
Current text-to-image (T2I) models often fail to account for diverse human experiences, leading to misaligned systems. We advocate for pluralism in AI alignment, where an AI understands and is steerable towards diverse, and often conflicting, human values. Our work provides three core contributions to achieve this in T2I models. First, we introduce a novel dataset for Diverse Intersectional Visual Evaluation (DIVE) -- the first multimodal dataset for pluralistic alignment. It enables deep alignment to diverse safety perspectives through a large pool of demographically intersectional human raters who provided extensive feedback across 1000 prompts, with high replication, capturing nuanced safety perceptions. Second, we empirically confirm demographics as a crucial proxy for diverse viewpoints in this domain, revealing significant, context-dependent differences in harm perception that diverge from conventional evaluations. Finally, we discuss implications for building aligned T2I models, including efficient data collection strategies, LLM judgment capabilities, and model steerability towards diverse perspectives. This research offers foundational tools for more equitable and aligned T2I systems. Content Warning: The paper includes sensitive content that may be harmful.
https://openreview.net/forum?id=2TxdMkJ6Yw
Datasets and Benchmarks
Spotlight
2TxdMkJ6Yw
CodeAssistBench (CAB): Dataset & Benchmarking for Multi-turn Chat-Based Code Assistance
[ "Myeongsoo Kim", "Shweta Garg", "Baishakhi Ray", "Varun Kumar", "Anoop Deoras" ]
Programming assistants powered by large language models have transformed software development, yet most benchmarks focus narrowly on code generation tasks. Recent efforts like InfiBench and StackEval attempt to address this gap using Stack Overflow data but remain limited to single-turn interactions in isolated contexts, require significant manual curation, and fail to represent complete project environments. We introduce CodeAssistBench (CAB), the first benchmark framework for evaluating multi-turn programming assistance in realistic settings that address questions grounded in actual codebases. Unlike existing programming Q&A benchmarks, CAB automatically generates scalable datasets from GitHub issues tagged with questions using configurable parameters (e.g., repository creation date, star count, programming languages), and includes automatic containerization of codebases for evaluation. It then evaluates models through simulated users in these containerized environments with full codebase access. Using this framework, we constructed a test set of 3,286 real-world programming questions across 214 repositories, spanning seven programming languages and diverse problem domains. Our evaluation of leading LLMs reveals a substantial capability gap: while models perform well on Stack Overflow questions with success rates of 70-83%, they resolve only up to 16.49% of CAB's issues from recent repositories (post-training cutoff). This discrepancy highlights the challenges of providing assistance in complex, project-specific contexts versus answering standalone questions. Our fully automated framework enables continuous benchmark expansion and is available at https://github.com/amazon-science/CodeAssistBench/.
https://openreview.net/forum?id=2R6y4Ku9kG
Datasets and Benchmarks
Poster
2R6y4Ku9kG
InternScenes: A Large-scale Simulatable Indoor Scene Dataset with Realistic Layouts
[ "Weipeng Zhong", "Peizhou Cao", "Yichen Jin", "Luo Li", "Wenzhe Cai", "Jingli Lin", "Hanqing Wang", "Zhaoyang Lyu", "Tai Wang", "Xudong XU", "Bo Dai", "Jiangmiao Pang" ]
The advancement of Embodied AI heavily relies on large-scale, simulatable 3D scene datasets characterized by scene diversity and realistic layouts. However, existing datasets typically suffer from limitations in data scale or diversity, sanitized layouts lacking small items, and severe object collisions. To address these shortcomings, we introduce \textbf{InternScenes}, a novel large-scale simulatable indoor scene dataset comprising approximately 40,000 diverse scenes by integrating three disparate scene sources, i.e., real-world scans, procedurally generated scenes, and designer-created scenes, including 1.96M 3D objects and covering 15 common scene types and 288 object classes. We particularly preserve a massive number of small items in the scenes, resulting in realistic and complex layouts with an average of 41.5 objects per region. Our comprehensive data processing pipeline ensures simulatability by creating real-to-sim replicas for real-world scans, enhances interactivity by incorporating interactive objects into these scenes, and resolves object collisions by physical simulations. We demonstrate the value of InternScenes with two benchmark applications: scene layout generation and point-goal navigation. Both show the new challenges posed by the complex and realistic layouts. More importantly, InternScenes paves the way for scaling up model training for both tasks, making generation and navigation in such complex scenes possible. We commit to open-sourcing the data and benchmarks to benefit the whole community.
https://openreview.net/forum?id=2Nyue9Tjtg
Datasets and Benchmarks
Poster
2Nyue9Tjtg
ORBIT - Open Recommendation Benchmark for Reproducible Research with Hidden Tests
[ "Jingyuan He", "Jiongnan Liu", "Vishan Vishesh Oberoi", "Bolin Wu", "Mahima Jagadeesh Patel", "Kangrui Mao", "Chuning Shi", "I-Ta Lee", "Arnold Overwijk", "Chenyan Xiong" ]
Recommender systems are among the most impactful AI applications, interacting with billions of users every day, guiding them to relevant products, services, or information tailored to their preferences. However, the research and development of recommender systems are hindered by existing datasets that fail to capture realistic user behaviors and inconsistent evaluation settings that lead to ambiguous conclusions. This paper introduces the \textbf{O}pen \textbf{R}ecommendation \textbf{B}enchmark for Reproducible Research with H\textbf{I}dden \textbf{T}ests (\textbf{ORBIT}), a unified benchmark for consistent and realistic evaluation of recommendation models. ORBIT offers a standardized evaluation framework of public datasets with reproducible splits and transparent settings for its public leaderboard. Additionally, ORBIT introduces a new webpage recommendation task, ClueWeb-Reco, featuring web browsing sequences from 87 million public, high-quality webpages. ClueWeb-Reco is a synthetic dataset derived from real, user-consented, and privacy-guaranteed browsing data. It aligns with modern recommendation scenarios and is reserved as the hidden test part of our leaderboard to challenge recommendation models' generalization ability. ORBIT measures 12 representative recommendation models on its public benchmark and introduces a prompted LLM baseline on the ClueWeb-Reco hidden test. Our benchmark results reflect general improvements of recommender systems on the public datasets, with variable individual performances. The results on the hidden test reveal the limitations of existing approaches in large-scale webpage recommendation and highlight the potential for improvements with LLM integrations. ORBIT benchmark, leaderboard, and codebase are available at \url{https://www.open-reco-bench.ai}.
https://openreview.net/forum?id=2NSPbJrrIW
Datasets and Benchmarks
Poster
2NSPbJrrIW
GUARD: Constructing Realistic Two-Player Matrix and Security Games for Benchmarking Game-Theoretic Algorithms
[ "Noah Krever", "Jakub Cerny", "Moise Blanchard", "Christian Kroer" ]
Game-theoretic algorithms are commonly benchmarked on recreational games, classical constructs from economic theory such as congestion and dispersion games, or entirely random game instances. While the past two decades have seen the rise of security games -- grounded in real-world scenarios like patrolling and infrastructure protection -- their practical evaluation has been hindered by limited access to the datasets used to generate them. In particular, although the structural components of these games (e.g., patrol paths derived from maps) can be replicated, the critical data defining target values -- central to utility modeling -- remain inaccessible. In this paper, we introduce a flexible framework that leverages open-access datasets to generate realistic matrix and security game instances. These include animal movement data for modeling anti-poaching scenarios and demographic and infrastructure data for infrastructure protection. Our framework allows users to customize utility functions and game parameters, while also offering a suite of preconfigured instances. We provide theoretical results highlighting the degeneracy and limitations of benchmarking on random games, and empirically compare our generated games against random baselines across a variety of standard algorithms for computing Nash and Stackelberg equilibria, including linear programming, incremental strategy generation, and self-play with no-regret learners.
https://openreview.net/forum?id=28bjSsEpMP
Datasets and Benchmarks
Spotlight
28bjSsEpMP
EuroSpeech: A Multilingual Speech Corpus
[ "Samuel Pfisterer", "Florian Grötschla", "Luca A Lanzendörfer", "Florian Yan", "Roger Wattenhofer" ]
Recent progress in speech processing has highlighted that high-quality performance across languages requires substantial training data for each individual language. While existing multilingual datasets cover many languages, they often contain insufficient data for each language, leading models trained on these datasets to exhibit poor performance on most supported languages. Our work addresses this challenge by introducing a scalable pipeline for constructing speech datasets from parliamentary recordings. The proposed pipeline includes robust components for media retrieval and a two-stage alignment algorithm designed to handle non-verbatim transcripts and long-form audio. Applying this pipeline to recordings from 22 European parliaments, we extract over 61k hours of aligned speech segments, achieving substantial per-language coverage with 19 languages exceeding 1k hours and 22 languages exceeding 500 hours of high-quality speech data. We obtain an average 41.8\% reduction in word error rates over baselines when finetuning an existing ASR model on our dataset, demonstrating the usefulness of our approach.
https://openreview.net/forum?id=26VLybEQ2h
Datasets and Benchmarks
Spotlight
26VLybEQ2h
Fixing It in Post: A Comparative Study of LLM Post-Training Data Quality and Model Performance
[ "Aladin Djuhera", "Swanand Ravindra Kadhe", "Syed Zawad", "Farhan Ahmed", "Heiko Ludwig", "Holger Boche" ]
Recent work on large language models (LLMs) has increasingly focused on post-training and alignment with datasets curated to enhance instruction following, world knowledge, and specialized skills. However, most post-training datasets used in leading open- and closed-source LLMs remain inaccessible to the public, with limited information about their construction process. This lack of transparency has motivated the recent development of open-source post-training corpora. While training on these open alternatives can yield performance comparable to that of leading models, systematic comparisons remain challenging due to the significant computational cost of conducting them rigorously at scale, and are therefore largely absent. As a result, it remains unclear how specific samples, task types, or curation strategies influence downstream performance when assessing data quality. In this work, we conduct the first comprehensive side-by-side analysis of two prominent open post-training datasets: Tulu-3-SFT-Mix and SmolTalk. Using the Magpie framework, we annotate each sample with detailed quality metrics, including turn structure (single-turn vs. multi-turn), task category, input quality, and response quality, and we derive statistics that reveal structural and qualitative similarities and differences between the two datasets. Based on these insights, we design a principled curation recipe that produces a new data mixture, **TuluTalk**, which contains 14% fewer samples than either source dataset while matching or exceeding their performance on key benchmarks. Our findings offer actionable insights for constructing more effective post-training datasets that improve model performance within practical resource limits. To support future research, we publicly release both the annotated source datasets and our curated TuluTalk mixture.
https://openreview.net/forum?id=1ybOI1VvQL
Datasets and Benchmarks
Spotlight
1ybOI1VvQL
Scaling Physical Reasoning with the PHYSICS Dataset
[ "Shenghe Zheng", "Qianjia Cheng", "Junchi Yao", "Mengsong Wu", "haonan he", "Ning Ding", "Yu Cheng", "Shuyue Hu", "LEI BAI", "Dongzhan Zhou", "Ganqu Cui", "Peng Ye" ]
Large Language Models (LLMs) have achieved remarkable progress on advanced reasoning tasks such as mathematics and coding competitions. Meanwhile, physics, despite being both reasoning-intensive and essential to real-world understanding, has received limited academic and industrial attention. This paper introduces PHYSICS, a dataset containing 16,568 high-quality physics problems spanning subjects and difficulty levels, to address this issue. Specifically, PHYSICS is curated with exercises from over 100 textbooks through a carefully designed pipeline for quality control. It covers five major physics domains: Mechanics, Electromagnetism, Thermodynamics, Optics, and Modern Physics. It also spans a wide range of difficulty levels, from high school to graduate-level physics courses. To utilize the data for improving and evaluating the model's physical reasoning capabilities, we split the dataset into training and test sets, and provide reasoning paths generated by powerful reasoning models for the training data to facilitate model training. In addition, for the evaluation part, we find that existing evaluation frameworks exhibit biases in aspects such as units, simplification, and precision in the physics domain. To balance efficiency and accuracy, we introduce a Rule+Model evaluation framework tailored to physics problems. Our evaluations on current state-of-the-art open-source and proprietary models highlight the limitations of current models in handling physics-related tasks. We hope that our dataset and evaluation methodology will jointly advance the development of LLMs in the field of physics. The code and data can be found at: https://github.com/Zhengsh123/PHYSICS.
https://openreview.net/forum?id=1lo778KztK
Datasets and Benchmarks
Poster
1lo778KztK
Meta-World+: An Improved, Standardized, RL Benchmark
[ "Reginald McLean", "Evangelos Chatzaroulas", "Luc McCutcheon", "Frank Röder", "Tianhe Yu", "Zhanpeng He", "K.R. Zentner", "Ryan Julian", "J K Terry", "Isaac Woungang", "Nariman Farsad", "Pablo Samuel Castro" ]
Meta-World is widely used for evaluating multi-task and meta-reinforcement learning agents, which are challenged to master diverse skills simultaneously. Since its introduction, however, there have been numerous undocumented changes which inhibit a fair comparison of algorithms. This work strives to disambiguate these results from the literature, while also leveraging the past versions of Meta-World to provide insights into multi-task and meta-reinforcement learning benchmark design. Through this process we release an open-source version of Meta-World that has full reproducibility of past results, is more technically ergonomic, and gives users more control over the tasks that are included in a task set.
https://openreview.net/forum?id=1de3azE606
Datasets and Benchmarks
Poster
1de3azE606
MyoChallenge 2024: A New Benchmark for Physiological Dexterity and Agility in Bionic Humans
[ "Cheryl Wang", "Chun Kwang Tan", "Balint K Hodossy", "Shirui Lyu", "Pierre Schumacher", "James Heald", "Kai Biegun", "Samo Hromadka", "Maneesh Sahani", "Gunwoo Park", "Beomsoo Shin", "JongHyun Park", "SEUNGBUM KOO", "Chenhui Zuo", "Chengtian Ma", "Yanan Sui", "Nicklas Hansen", "Stone Tao", "Yuan Gao", "Hao Su", "Seungmoon Song", "Letizia Gionfrida", "Massimo Sartori", "Guillaume Durandau", "Vikash Kumar", "Vittorio Caggiano" ]
Recent advancements in bionic prosthetic technology offer transformative opportunities to restore mobility and functionality for individuals with missing limbs. Users of bionic limbs, or bionic humans, learn to seamlessly integrate prosthetic extensions into their motor repertoire, regaining critical motor abilities. The remarkable movement generalization and environmental adaptability demonstrated by these individuals highlight motor intelligence capabilities unmatched by current artificial intelligence systems. Addressing these limitations, MyoChallenge '24 at NeurIPS 2024 established a benchmark for human-robot coordination with an emphasis on joint control of both biological and mechanical limbs. The competition featured two distinct tracks: a manipulation task utilizing the myoMPL model, integrating a virtual biological arm and the Modular Prosthetic Limb (MPL) for a passover task; and a locomotion task using the novel myoOSL model, combining a bilateral virtual biological leg with a trans-femoral amputation and the Open Source Leg (OSL) to navigate varied terrains. Marking the third iteration of the MyoChallenge, the event attracted over 50 teams with more than 290 submissions from around the globe, with diverse participants ranging from independent researchers to high school students. The competition facilitated the development of several state-of-the-art control algorithms for bionic musculoskeletal systems, leveraging techniques such as imitation learning, muscle synergy, and model-based reinforcement learning, which significantly surpassed our proposed baseline performance by a factor of 10. By providing the open-source simulation framework of MyoSuite, standardized tasks, and physiologically realistic models, MyoChallenge serves as a reproducible testbed and benchmark for bridging ML and biomechanics. The competition website is featured here: https://sites.google.com/view/myosuite/myochallenge/myochallenge-2024.
https://openreview.net/forum?id=1dSLbhErNv
Datasets and Benchmarks
Poster
1dSLbhErNv
DataSIR: A Benchmark Dataset for Sensitive Information Recognition
[ "Fan Mo", "Bo Liu", "Yuan Fan", "Kun Qin", "Yizhou Zhao", "Jinhe Zhou", "Jia Sun", "Jinfei Liu", "Kui Ren" ]
With the rapid development of artificial intelligence technologies, the demand for training data has surged, exacerbating risks of data leakage. Despite increasing incidents and costs associated with such leaks, data leakage prevention (DLP) technologies lag behind evolving evasion techniques that bypass existing sensitive information recognition (SIR) models. Current datasets lack comprehensive coverage of these adversarial transformations, limiting the evaluation of robust SIR systems. To address this gap, we introduce DataSIR, a benchmark dataset specifically designed to evaluate SIR models on sensitive data subjected to diverse format transformations. We curate 26 sensitive data categories based on multiple international regulations, and collect 131,890 original samples correspondingly. Through empirical analysis of real-world evasion tactics, we implement 21 format transformation methods, which are applied to the original samples, expanding the dataset to 1,647,501 samples to simulate adversarial scenarios. We evaluate DataSIR using four traditional NLP models and four large language models (LLMs). For LLMs, we design structured prompts with varying degrees of contextual hints to assess the impact of prior knowledge on recognition accuracy. These evaluations demonstrate that our dataset effectively differentiates the performance of various SIR algorithms. Combined with its rich category and format diversity, the dataset can serve as a benchmark for evaluating related models and help develop more advanced SIR models in the future. Our dataset and experimental code are publicly available at https://www.kaggle.com/datasets/fanmo1/datasir and https://github.com/Fan-Mo-ZJU/DataSIR.
https://openreview.net/forum?id=1aJ4yvtCac
Datasets and Benchmarks
Poster
1aJ4yvtCac
PatientSim: A Persona-Driven Simulator for Realistic Doctor-Patient Interactions
[ "Daeun Kyung", "Hyunseung Chung", "Seongsu Bae", "Jiho Kim", "Jae Ho Sohn", "Taerim Kim", "Soo Kyung Kim", "Edward Choi" ]
Doctor-patient consultations require multi-turn, context-aware communication tailored to diverse patient personas. Training or evaluating doctor LLMs in such settings requires realistic patient interaction systems. However, existing simulators often fail to reflect the full range of personas seen in clinical practice. To address this, we introduce PatientSim, a patient simulator that generates realistic and diverse patient personas for clinical scenarios, grounded in medical expertise. PatientSim operates using: 1) clinical profiles, including symptoms and medical history, derived from real-world data in the MIMIC-ED and MIMIC-IV datasets, and 2) personas defined by four axes: personality, language proficiency, medical history recall level, and cognitive confusion level, resulting in 37 unique combinations. We evaluate eight LLMs for factual accuracy and persona consistency. The top-performing open-source model, Llama 3.3 70B, is validated by four clinicians to confirm the robustness of our framework. As an open-source platform, PatientSim provides a reproducible and scalable solution that can be customized for specific training needs. Offering a privacy-compliant environment, it serves as a robust testbed for evaluating medical dialogue systems across diverse patient presentations and shows promise as an educational tool for healthcare. The code is available at https://github.com/dek924/PatientSim.
https://openreview.net/forum?id=1THAjdP4QJ
Datasets and Benchmarks
Spotlight
1THAjdP4QJ
PARALLELPROMPT: Extracting Parallelism from Large Language Model Queries
[ "Steven Kolawole", "Keshav Santhanam", "Virginia Smith", "Pratiksha Thaker" ]
LLM serving systems typically treat user prompts as monolithic inputs, optimizing inference through decoding tricks or inter-query batching. However, many real-world prompts contain *latent semantic parallelism*—decomposable structures where subtasks can be executed independently to reduce latency while preserving meaning. We introduce PARALLELPROMPT, the first benchmark for measuring intra-query parallelism in natural user prompts. Our dataset comprises over 37,000 real-world prompts from public LLM chat logs, each annotated with a structured schema capturing task templates, shared context, and iteration inputs. These schemas are extracted using LLM-assisted prompting with rule-based multilingual validation. To evaluate the benefits of decomposition, we provide an execution suite that benchmarks serial vs. parallel strategies, measuring latency, structural adherence, and semantic fidelity. Our results show that intra-query parallelism can be successfully parsed in over 75\% of curated datasets, unlocking up to *$5\times$ speedups* on tasks like translation, comprehension, and comparative analysis, with minimal quality degradation. By releasing this benchmark, curation pipeline, and evaluation suite, we provide the first standardized testbed for studying structure-aware execution in LLM serving pipelines.
https://openreview.net/forum?id=1KSxxnFNb9
Datasets and Benchmarks
Poster
1KSxxnFNb9
PolypSense3D: A Multi-Source Benchmark Dataset for Depth-Aware Polyp Size Measurement in Endoscopy
[ "Ruyu Liu", "Lin Wang", "Zhou Mingming", "Jianhua Zhang", "ZHANG HAOYU", "Xiufeng Liu", "Xu Cheng", "Sixian Chan", "Shen yanbin", "Dai Sheng", "Yuping Yan", "Yaochu Jin", "Lingjuan Lyu" ]
Accurate polyp sizing during endoscopy is crucial for cancer risk assessment but is hindered by subjective methods and inadequate datasets lacking integrated 2D appearance, 3D structure, and real-world size information. We introduce PolypSense3D, the first multi-source benchmark dataset specifically targeting depth-aware polyp size measurement. It uniquely integrates over 43,000 frames from virtual simulations, physical phantoms, and clinical sequences, providing synchronized RGB, dense/sparse depth, segmentation masks, camera parameters, and millimeter-scale size labels derived via a novel forceps-assisted in-vivo annotation technique. To establish its value, we benchmark state-of-the-art segmentation and depth estimation models. Results quantify significant domain gaps between simulated/phantom and clinical data and reveal substantial error propagation from perception stages to final size estimation, with the best fully automated pipelines achieving an average Mean Absolute Error (MAE) of 0.95 mm on the clinical data subset. Publicly released under CC BY-SA 4.0 with code and evaluation protocols, PolypSense3D offers a standardized platform to accelerate research in robust, clinically relevant quantitative endoscopic vision. The benchmark dataset and code are available at: https://github.com/HNUicda/PolypSense3D and https://doi.org/10.7910/DVN/K13H89.
https://openreview.net/forum?id=138y2wo6ok
Datasets and Benchmarks
Poster
138y2wo6ok
RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation
[ "Songhao Han", "Boxiang Qiu", "Yue Liao", "Siyuan Huang", "Chen Gao", "Shuicheng YAN", "Si Liu" ]
Recent advances in vision-language models (VLMs) have enabled instruction-conditioned robotic systems with improved generalization. However, most existing work focuses on reactive System 1 policies, underutilizing VLMs’ strengths in semantic reasoning and long-horizon planning. These System 2 capabilities—characterized by deliberative, goal-directed thinking—remain underexplored due to the limited temporal scale and structural complexity of current benchmarks. To address this gap, we introduce RoboCerebra, a benchmark for evaluating high-level reasoning in long-horizon robotic manipulation. RoboCerebra includes: (1) a large-scale simulation dataset with extended task horizons and diverse subtask sequences in household environments; (2) a hierarchical framework combining a high-level VLM planner with a low-level vision-language-action (VLA) controller; and (3) an evaluation protocol targeting planning, reflection, and memory through structured System 1–System 2 interaction. The dataset is constructed via a top-down pipeline, where GPT generates task instructions and decomposes them into subtask sequences. Human operators execute the subtasks in simulation, yielding high-quality trajectories with dynamic object variations. Compared to prior benchmarks, RoboCerebra features significantly longer action sequences and denser annotations. We further benchmark state-of-the-art VLMs as System 2 modules and analyze their performance across key cognitive dimensions, advancing the development of more capable and generalizable robotic planners.
https://openreview.net/forum?id=0JtNyaHbNx
Datasets and Benchmarks
Poster
0JtNyaHbNx
PanTS: The Pancreatic Tumor Segmentation Dataset
[ "Wenxuan Li", "Xinze Zhou", "Qi Chen", "Tianyu Lin", "Pedro R. A. S. Bassi", "Xiaoxi Chen", "Chen Ye", "Zheren Zhu", "Kai Ding", "Heng Li", "Kang Wang", "Yang Yang", "Yucheng Tang", "Daguang Xu", "Alan Yuille", "Zongwei Zhou" ]
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, slice thickness, etc. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation than those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16× larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.
https://openreview.net/forum?id=0BCUXg40r7
Datasets and Benchmarks
Poster
0BCUXg40r7
SceneSplat++: A Large Dataset and Comprehensive Benchmark for Language Gaussian Splatting
[ "Mengjiao Ma", "Qi Ma", "Yue Li", "Jiahuan Cheng", "Runyi Yang", "Bin Ren", "Nikola Popovic", "Mingqiang Wei", "Nicu Sebe", "Ender Konukoglu", "Luc Van Gool", "Theo Gevers", "Martin R. Oswald", "Danda Pani Paudel" ]
3D Gaussian Splatting (3DGS) serves as a highly performant and efficient encoding of scene geometry, appearance, and semantics. Moreover, grounding language in 3D scenes has proven to be an effective strategy for 3D scene understanding. Current work on Language Gaussian Splatting falls into three main groups: (i) per-scene optimization-based, (ii) per-scene optimization-free, and (iii) generalizable approaches. However, most of these methods are evaluated only on rendered 2D views of a handful of scenes and viewpoints close to the training views, limiting insight into holistic 3D understanding. To address this gap, we propose the first large-scale benchmark that systematically assesses these three groups of methods directly in 3D space, evaluating on 1060 scenes across three indoor datasets and one outdoor dataset. Benchmark results demonstrate a clear advantage of the generalizable paradigm, particularly in relaxing the scene-specific limitation, enabling fast feed-forward inference on novel scenes, and achieving superior segmentation performance. We further introduce SceneSplat-49K -- a carefully curated 3DGS dataset comprising around 49K diverse indoor and outdoor scenes trained from multiple sources, with which we demonstrate that generalizable approaches can harness strong data priors. Our code, benchmark, and datasets are available.
https://openreview.net/forum?id=02ymnxlypN
Datasets and Benchmarks
Poster
02ymnxlypN