MM-RIS: Multimodal Referring Image Segmentation Dataset

The MM-RIS dataset was introduced in the paper RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation.

This large-scale benchmark supports the multimodal referring image segmentation (RIS) task by providing a goal-aligned approach to supervise and evaluate how effectively natural language contributes to infrared and visible image fusion outcomes.

Paper

RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation

Code

The official code repository for the associated RIS-FUSION project can be found on GitHub: https://github.com/SijuMa2003/RIS-FUSION

Introduction

Text-driven infrared and visible image fusion has gained attention for enabling natural language to guide the fusion process. However, existing methods often lack a goal-aligned task to supervise and evaluate how effectively the input text contributes to the fusion outcome.

We observe that referring image segmentation (RIS) and text-driven fusion share a common objective: highlighting the object referred to by the text. Motivated by this, we propose RIS-FUSION, a cascaded framework that unifies fusion and RIS through joint optimization.
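To make the cascade concrete, here is a toy sketch of the joint objective, not the authors' implementation: a fuser produces a fused image from the infrared-visible pair, an RIS network predicts a mask from the fused image and the referring expression, and both are optimized together. The fusion term below is a stand-in; the actual training combines Sobel/gradient, SSIM, and MSE terms, as the loss weights in the training example later in this card suggest.

import torch
import torch.nn.functional as F

def joint_objective(fused, pred_logits, gt_mask, ir, vis, lambda_prefusion=3.0):
    # Segmentation loss on the RIS prediction (highlights the text-referred object).
    seg_loss = F.binary_cross_entropy_with_logits(pred_logits, gt_mask)
    # Illustrative fusion term only: keep the fused image close to the brighter source.
    fusion_loss = F.mse_loss(fused, torch.maximum(ir, vis))
    return seg_loss + lambda_prefusion * fusion_loss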

To support the multimodal referring image segmentation task, we introduce MM-RIS, a large-scale benchmark with 12.5k training and 3.5k testing triplets, each consisting of an infrared-visible image pair, a segmentation mask, and a referring expression.

Dataset Structure

The MM-RIS dataset is available in this Hugging Face repository and consists of the following Parquet files:

  • mm_ris_test.parquet
  • mm_ris_val.parquet
  • mm_ris_train_part1.parquet
  • mm_ris_train_part2.parquet

These files together comprise 12.5k training and 3.5k testing triplets. Each triplet includes an infrared image, a visible image, a segmentation mask, and a natural language referring expression. In the Parquet schema, each row stores an index, an image_name, the visible_image and infrared_image, the referring expression in the question column, and the mask in the segmentation column.
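For a quick look at the data outside the RIS-FUSION pipeline, the snippet below is a minimal loading sketch. It assumes the image columns store PNG-encoded bytes and the segmentation column stores a NumPy-serialized mask, which is what the dataset viewer preview suggests; adjust the decoding if the columns are stored differently (for example, as base64 text).

import io

import numpy as np
import pandas as pd
from PIL import Image

# Read one split and decode the first triplet (assumed encodings, see note above).
df = pd.read_parquet("./data/mm_ris_val.parquet")
row = df.iloc[0]

visible = Image.open(io.BytesIO(row["visible_image"]))    # visible (RGB) image
infrared = Image.open(io.BytesIO(row["infrared_image"]))  # infrared image
mask = np.load(io.BytesIO(row["segmentation"]))           # H x W uint8 mask
expression = row["question"]                              # referring expression

print(row["image_name"], expression, visible.size, mask.shape)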

Sample Usage

To prepare the MM-RIS dataset for use with the RIS-FUSION code, you will need to download all the dataset files from this repository and merge the training partitions.
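If you prefer to script the download in step 1 below, the sketch here uses huggingface_hub; the repo_id is a placeholder that you should replace with this dataset repository's actual id.

from huggingface_hub import hf_hub_download

# Fetch the four Parquet files into ./data/ .
# The repo_id is a placeholder -- substitute this dataset repository's id.
for name in [
    "mm_ris_test.parquet",
    "mm_ris_val.parquet",
    "mm_ris_train_part1.parquet",
    "mm_ris_train_part2.parquet",
]:
    hf_hub_download(
        repo_id="<this-dataset-repo-id>",
        repo_type="dataset",
        filename=name,
        local_dir="./data",
    )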

  1. Download the dataset files: Download mm_ris_test.parquet, mm_ris_val.parquet, mm_ris_train_part1.parquet, and mm_ris_train_part2.parquet from this Hugging Face repository and place them under a data/ directory in your project, ideally within a cloned RIS-FUSION GitHub repository.

  2. Merge partitioned parquet files: The RIS-FUSION GitHub repository provides a script to merge the partitioned training data. Assuming you have cloned the repository and placed the parquet files in ./data/:

    python ./data/merge_parquet.py
    

    This script will combine mm_ris_train_part1.parquet and mm_ris_train_part2.parquet into a single mm_ris_train.parquet file (a rough pandas equivalent is sketched after this list).
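
If you cannot run the repository script, the following pandas sketch is a rough equivalent (the official merge_parquet.py may differ in details such as row ordering or metadata):

import pandas as pd

# Concatenate the two training partitions into one Parquet file
# (rough equivalent of the repository's merge_parquet.py).
part1 = pd.read_parquet("./data/mm_ris_train_part1.parquet")
part2 = pd.read_parquet("./data/mm_ris_train_part2.parquet")
merged = pd.concat([part1, part2], ignore_index=True)
merged.to_parquet("./data/mm_ris_train.parquet")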

Once the dataset is prepared, you can use it for training and testing models as shown in the examples below.

Training Example

python train_with_lavt.py \
  --train_parquet ./data/mm_ris_train.parquet \
  --val_parquet ./data/mm_ris_val.parquet \
  --prefusion_model unet_fuser --prefusion_base_ch 32 \
  --epochs 10 -b 16 -j 16 \
  --img_size 480 \
  --swin_type base \
  --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth \
  --bert_tokenizer ./bert/pretrained_weights/bert-base-uncased \
  --ck_bert ./bert/pretrained_weights/bert-base-uncased \
  --init_from_lavt_one ./pretrained_weights/lavt_one_8_cards_ImgNet22KPre_swin-base-window12_refcoco+_adamw_b32lr0.00005wd1e-2_E40.pth \
  --lr_seg 5e-5 --wd_seg 1e-2 --lr_pf 1e-4 --wd_pf 1e-2 \
  --lambda_prefusion 3.0 \
  --w_sobel_vis 0.0 \
  --w_sobel_ir 1.0 \
  --w_grad 1.0 \
  --w_ssim_vis 0.5 \
  --w_ssim_ir 0.0 \
  --w_mse_vis 0.5 \
  --w_mse_ir 2.0 \
  --eval_vis_dir ./eval_vis \
  --output-dir ./ckpts/risfusion

Testing Example

python test.py \
  --ckpt ./ckpts/risfusion/model_best_lavt.pth \
  --test_parquet ./data/mm_ris_test.parquet \
  --out_dir ./your_output_dir \
  --bert_tokenizer ./bert/pretrained_weights/bert-base-uncased \
  --ck_bert ./bert/pretrained_weights/bert-base-uncased

Citation

If you find this dataset or the associated paper useful, please consider citing:

@article{RIS-FUSION2025,
  title   = {RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation},
  author  = {Ma, Siju and Gong, Changsiyu and Fan, Xiaofeng and Ma, Yong and Jiang, Chengjie},
  journal = {...},
  year    = {2025}
}

Acknowledgements
