CogSense-8B

This repository contains the weights for CogSense-8B, a Multimodal Large Language Model (MLLM) introduced in the paper Toward Cognitive Supersensing in Multimodal Large Language Model.

Project Page | Code | Paper

Introduction

CogSense-8B is trained with Cognitive Supersensing, a novel training paradigm that endows MLLMs with human-like visual imagery capabilities. By integrating a Latent Visual Imagery Prediction (LVIP) head, the model learns to predict sequences of visual cognitive latent embeddings and align them with answers, forming vision-based internal reasoning chains. This approach aims to bridge the gap between perceptual recognition and complex cognitive understanding.
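To make the idea concrete, here is a minimal, purely illustrative sketch of what an LVIP-style head could look like: a projection from an LLM hidden state to a short sequence of latent embeddings, plus a cosine alignment loss against target latents. All names, dimensions, and the loss form are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the real model's dims differ).
hidden_dim, latent_dim, seq_len = 16, 8, 4

# Hypothetical LVIP head: a single linear projection from the LLM
# hidden state to a sequence of visual cognitive latent embeddings.
W = rng.normal(size=(hidden_dim, seq_len * latent_dim))

def lvip_head(hidden_state: np.ndarray) -> np.ndarray:
    """Project one hidden state into `seq_len` latent embeddings."""
    latents = hidden_state @ W
    return latents.reshape(seq_len, latent_dim)

def alignment_loss(predicted: np.ndarray, target: np.ndarray) -> float:
    """Mean cosine distance between predicted and target latent sequences."""
    p = predicted / np.linalg.norm(predicted, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(p * t, axis=-1)))

h = rng.normal(size=hidden_dim)
pred = lvip_head(h)
# A sequence is perfectly aligned with itself, so the loss is ~0.
print(pred.shape, round(alignment_loss(pred, pred), 6))
```

The point of the sketch is only the data flow: one hidden state yields a whole sequence of latents, and training pressure comes from aligning that sequence with answer-grounded targets.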

CogSense-Bench

The model's cognitive capabilities are evaluated on CogSense-Bench, a comprehensive visual question answering (VQA) benchmark assessing five cognitive dimensions:

  • Fluid intelligence
  • Crystallized intelligence
  • Visuospatial cognition
  • Mental simulation
  • Visual routines

Citation

If you find this work useful, please consider citing:

@misc{li2026cognitivesupersensingmultimodallarge,
      title={Toward Cognitive Supersensing in Multimodal Large Language Model}, 
      author={Boyi Li and Yifan Shen and Yuanzhe Liu and Yifan Xu and Jiateng Liu and Xinzhuo Li and Zhengyuan Li and Jingyuan Zhu and Yunhan Zhong and Fangzhou Lan and Jianguo Cao and James M. Rehg and Heng Ji and Ismini Lourentzou and Xu Cao},
      year={2026},
      eprint={2602.01541},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.01541}, 
}