arxiv:2605.10616

MulTaBench: Benchmarking Multimodal Tabular Learning with Text and Image

Published on May 11
Submitted by Eilam Shapira on May 14
#2 Paper of the day

Abstract

Multimodal tabular learning benchmarks reveal that task-specific embedding tuning improves performance over frozen pretrained embeddings, particularly when modalities provide complementary predictive signals.

AI-generated summary

Tabular Foundation Models have recently established the state of the art in supervised tabular learning, by leveraging pretraining to learn generalizable representations of numerical and categorical structured data. However, they lack native support for unstructured modalities such as text and image, and rely on frozen, pretrained embeddings to process them. On established Multimodal Tabular Learning benchmarks, we show that tuning the embeddings to the task improves performance. Existing benchmarks, however, often focus on the mere co-occurrence of modalities; this leads to high variance across datasets and masks the benefits of task-specific tuning. To address this gap, we introduce MulTaBench, a benchmark of 40 datasets, split equally between image-tabular and text-tabular tasks. We focus on predictive tasks where the modalities provide complementary predictive signal, and where generic embeddings lose critical information, necessitating Target-Aware Representations that are aligned with the task. Our experimental results demonstrate that the gains from target-aware representation tuning generalize across both text and image modalities, several tabular learners, encoder scales, and embedding dimensions. MulTaBench constitutes the largest image-tabular benchmarking effort to date, spanning high-impact domains such as healthcare and e-commerce. It is designed to enable research on novel architectures that incorporate joint modeling and target-aware representations, paving the way for Multimodal Tabular Foundation Models.

Community

Many real-world prediction problems combine structured features with text or images: clinical records with X-rays, real-estate metadata with street-view photos, product listings with descriptions and images. But current approaches usually force one side to adapt: either LLMs/VLMs handle the unstructured data but struggle with tabular inductive biases, or tabular models use frozen generic embeddings that were not optimized for the actual prediction target.
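To make the second approach concrete, here is a minimal sketch of the frozen-embedding baseline: encode the text once with a generic pretrained encoder, then hand the embeddings to a standard tabular learner. The encoder checkpoint, feature columns, and synthetic data below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Tiny synthetic stand-in for a text+tabular dataset (hypothetical columns).
texts = [f"product listing number {i}" for i in range(200)]
rng = np.random.default_rng(0)
X_tab = rng.normal(size=(200, 4))       # numeric tabular features
y = (X_tab[:, 0] > 0).astype(int)       # toy binary target

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any generic pretrained encoder
emb = encoder.encode(texts)                        # frozen: never sees the target

# Concatenate modalities and train a plain tabular learner on top.
X = np.hstack([X_tab, emb])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = HistGradientBoostingClassifier().fit(X_tr, y_tr)
print("frozen-embedding baseline accuracy:", clf.score(X_te, y_te))
```

The key limitation is visible in the code: the encoder is called once, outside the training loop, so nothing about the prediction target can flow back into the text representation.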

In this paper, we introduce MulTaBench, a benchmark for multimodal tabular learning with text and image. The key idea is to focus on datasets where the modalities are genuinely complementary, and where target-aware representations improve over frozen embeddings.
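One simple way to probe "genuinely complementary" is to fit the same learner on tabular-only, text-only, and concatenated features and compare. The toy data below is a deliberately extreme XOR construction (our own illustration, not a MulTaBench dataset): either modality alone is uninformative, while together they determine the label.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic dataset where the label depends on a tabular feature AND a
# "text embedding" feature jointly (XOR), so neither modality suffices alone.
rng = np.random.default_rng(0)
X_tab = rng.normal(size=(500, 4))
emb = rng.normal(size=(500, 8))                         # stand-in for text embeddings
y = ((X_tab[:, 0] > 0) ^ (emb[:, 0] > 0)).astype(int)   # complementary signal

X_both = np.hstack([X_tab, emb])
for name, feats in [("tabular only", X_tab), ("text only", emb), ("both", X_both)]:
    acc = cross_val_score(HistGradientBoostingClassifier(), feats, y, cv=5).mean()
    print(f"{name:12s} CV accuracy: {acc:.3f}")
```

On this construction the single-modality runs hover near chance while the combined run is far above it; datasets with that gap are the ones where representation quality actually matters.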

Our results suggest that the next step is not just better preprocessing, but models that can natively combine structured and unstructured modalities while learning representations aligned with the downstream target.
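As a minimal sketch of what "target-aware" means in practice: instead of freezing the encoder, backpropagate the task loss through it jointly with a small head over the tabular features. The checkpoint name, fusion-by-concatenation, and head sizes below are assumptions for illustration, not the paper's prescribed architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL = "distilbert-base-uncased"  # assumption: any small text encoder works here

class JointTextTabular(nn.Module):
    """Concatenate a tuned text representation with tabular features."""
    def __init__(self, n_tab, n_classes):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL)   # NOT frozen
        d = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(d + n_tab, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, input_ids, attention_mask, x_tab):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]                              # first-token pooling
        return self.head(torch.cat([pooled, x_tab], dim=-1))

tok = AutoTokenizer.from_pretrained(MODEL)
model = JointTextTabular(n_tab=4, n_classes=2)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One training step on toy inputs: a single task loss updates encoder + head,
# pulling the text representation toward the prediction target.
batch = tok(["toy listing text"], return_tensors="pt", padding=True)
x_tab, y = torch.randn(1, 4), torch.tensor([1])
logits = model(batch["input_ids"], batch["attention_mask"], x_tab)
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
opt.step()
```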

What a great idea and even better implementation!
Would definitely use this in my next project!


Get this paper in your agent:

hf papers read 2605.10616
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
