SenseNova-SI: Scaling Spatial Intelligence with Multimodal Foundation Models

Code | arXiv | Leaderboard

🔥 Please check out our newly released SenseNova-SI-1.1-InternVL3-2B and SenseNova-SI-1.1-InternVL3-8B.

The current model will be deprecated in due course.

Overview

Despite remarkable progress, leading multimodal models still exhibit notable deficiencies in spatial intelligence: the ability to make metric estimations, understand spatial relationships, handle viewpoint changes, and integrate information across complex scenes. We take a scaling perspective: we construct and curate a large-scale, comprehensive collection of spatial intelligence data and, through continued training on powerful multimodal foundation models, cultivate multi-faceted spatial understanding in the SenseNova-SI family of models. In the future, SenseNova-SI will be integrated with larger-scale in-house models.

Release Information

Currently, we build SenseNova-SI upon popular open-source foundation models to maximize compatibility with existing research pipelines. In this release, we present SenseNova-SI-InternVL3-2B and SenseNova-SI-InternVL3-8B, which achieve state-of-the-art performance among open-source models of comparable size across four recent spatial intelligence benchmarks: VSI, MMSI, MindCube, and ViewSpatial.

| Model | VSI | MMSI | MindCube-Tiny | ViewSpatial |
| --- | --- | --- | --- | --- |
| **Open-source Models (~2B)** | | | | |
| InternVL3-2B | 32.98 | 26.50 | 37.50 | 32.56 |
| Qwen3-VL-2B-Instruct | 50.36 | 28.90 | 34.52 | 36.97 |
| MindCube-3B-RawQA-SFT | 17.24 | 1.70 | 51.73 | 24.14 |
| MindCube-3B-Aug-CGMap-FFR-Out-SFT | 29.60 | 29.10 | 41.06 | 30.90 |
| MindCube-3B-Plain-CGMap-FFR-Out-SFT | 29.93 | 30.40 | 39.90 | 31.20 |
| SpatialLadder-3B | 44.86 | 27.40 | 43.46 | 39.85 |
| SpatialMLLM-4B | 45.98 | 26.10 | 33.46 | 34.66 |
| **SenseNova-SI-InternVL3-2B** | **58.47** | **35.50** | **71.35** | **40.62** |
| **Open-source Models (~8B)** | | | | |
| InternVL3-8B | 42.14 | 28.00 | 41.54 | 38.66 |
| Qwen3-VL-8B-Instruct | 57.90 | 31.10 | 29.42 | 42.20 |
| BAGEL-7B | 30.90 | 33.10 | 34.71 | 41.32 |
| SpaceR-7B | 36.29 | 27.40 | 37.98 | 35.85 |
| ViLaSR-7B | 44.63 | 30.20 | 35.10 | 35.71 |
| **SenseNova-SI-InternVL3-8B** | **62.80** | **37.90** | **89.33** | **53.92** |
| **Proprietary Models** | | | | |
| Gemini-2.5-pro-2025-06 | 53.57 | 38.00 | 57.60 | 46.06 |
| Grok-4-2025-07-09 | 47.92 | 37.80 | 63.56 | 43.23 |
| GPT-5-2025-08-07 | 55.03 | 41.80 | 56.30 | 45.59 |

What's Next?

We will release the accompanying technical report shortly. Please stay tuned!

🛠️ QuickStart

Installation

We recommend using uv to manage the environment.

uv installation guide: https://docs.astral.sh/uv/getting-started/installation/#installing-uv

git clone git@github.com:OpenSenseNova/SenseNova-SI.git
cd SenseNova-SI/
uv sync --extra cu124  # or one of [cu118|cu121|cu124|cu126|cu128|cu129], depending on your CUDA version
source .venv/bin/activate
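
After activating the environment, a quick sanity check can confirm that the GPU build of PyTorch is usable. This is a minimal sketch that only assumes torch and transformers were installed by the sync step above; adjust it if your extras differ.

import torch
import transformers

# Report the installed versions and whether a CUDA device is visible.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))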

How to Use

Here's an example demonstrating how to use the SenseNova-SI model for multi-image visual question answering with the transformers library.

import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_path = "sensenova/SenseNova-SI-1.1-InternVL3-8B"

# Load processor and model
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True).eval()

# Example: Pos-Obj-Obj subset of MMSI-Bench (from the GitHub examples)
# The example images ship with the GitHub repository; substitute your own
# images if they are not available locally.
try:
    image1 = Image.open("./examples/Q1_1.png").convert("RGB")
    image2 = Image.open("./examples/Q1_2.png").convert("RGB")
except FileNotFoundError:
    print("Example images not found. Please provide 'examples/Q1_1.png' and 'examples/Q1_2.png', or use your own images.")
    # Fallback placeholders so the script still runs end to end
    image1 = Image.new("RGB", (500, 500), color="red")
    image2 = Image.new("RGB", (500, 500), color="blue")

question = (
    "<image><image>\n"
    "You are standing in front of the dice pattern and observing it. "
    "Where is the desk lamp approximately located relative to you?\n"
    "Options: A: 90 degrees counterclockwise, B: 90 degrees clockwise, "
    "C: 135 degrees counterclockwise, D: 135 degrees clockwise"
)

# Prepare inputs
inputs = processor(text=question, images=[image1, image2], return_tensors="pt").to(model.device)

# Generate response
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
    response = processor.batch_decode(output, skip_special_tokens=True)[0]

print(f"Question: {question}")
print(f"Answer: {response}")

🖊️ Citation

@article{sensenova-si,
  title = {Scaling Spatial Intelligence with Multimodal Foundation Models},
  author = {Cai, Zhongang and Wang, Ruisi and Gu, Chenyang and Pu, Fanyi and Xu, Junxiang and Wang, Yubo and Yin, Wanqi and Yang, Zhitao and Wei, Chen and Sun, Qingping and Zhou, Tongxi and Li, Jiaqi and Pang, Hui En and Qian, Oscar and Wei, Yukun and Lin, Zhiqian and Shi, Xuanke and Deng, Kewang and Han, Xiaoyang and Chen, Zukai and Fan, Xiangyu and Deng, Hanming and Lu, Lewei and Pan, Liang and Li, Bo and Liu, Ziwei and Wang, Quan and Lin, Dahua and Yang, Lei},
  journal = {arXiv preprint arXiv:2511.13719},
  year = {2025}
}