Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning
Abstract
Orion-MSP is a tabular in-context learning architecture that addresses limitations of current models by incorporating multi-scale processing, block-sparse attention, and a Perceiver-style memory, achieving state-of-the-art performance on diverse benchmarks.
Tabular data remain the predominant format for real-world applications. Yet, developing effective neural models for tabular data remains challenging due to heterogeneous feature types and complex interactions occurring at multiple scales. Recent advances in tabular in-context learning (ICL), such as TabPFN and TabICL, have achieved state-of-the-art performance comparable to gradient-boosted trees (GBTs) without task-specific fine-tuning. However, current architectures exhibit key limitations: (1) single-scale feature processing that overlooks hierarchical dependencies, (2) dense attention with quadratic scaling in table width, and (3) strictly sequential component processing that prevents iterative representation refinement and cross-component communication. To address these challenges, we introduce Orion-MSP, a tabular ICL architecture featuring three key innovations: (1) multi-scale processing to capture hierarchical feature interactions; (2) block-sparse attention combining windowed, global, and random patterns for scalable efficiency and long-range connectivity; and (3) a Perceiver-style memory enabling safe bidirectional information flow across components. Across diverse benchmarks, Orion-MSP matches or surpasses state-of-the-art performance while scaling effectively to high-dimensional tables, establishing a new standard for efficient tabular in-context learning. The model is publicly available at https://github.com/Lexsi-Labs/Orion-MSP.
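To make the attention pattern described in the abstract concrete, the sketch below builds a block-level boolean mask that combines windowed, global, and random connectivity. This is an illustrative example, not the authors' code: the function name `block_sparse_mask` and the window, global, and random hyperparameters are placeholders chosen for the demonstration.

```python
# Illustrative sketch (not the Orion-MSP implementation) of a block-sparse
# attention mask combining windowed, global, and random patterns.
import torch

def block_sparse_mask(num_blocks: int, window: int = 1,
                      num_global: int = 1, num_random: int = 2,
                      seed: int = 0) -> torch.Tensor:
    """Return a boolean (num_blocks x num_blocks) mask where True means
    'block i may attend to block j'. All hyperparameters are hypothetical."""
    g = torch.Generator().manual_seed(seed)
    mask = torch.zeros(num_blocks, num_blocks, dtype=torch.bool)

    # Windowed (local) pattern: each block attends to its neighbors.
    for i in range(num_blocks):
        lo, hi = max(0, i - window), min(num_blocks, i + window + 1)
        mask[i, lo:hi] = True

    # Global pattern: a few designated blocks attend to, and are attended
    # by, every block.
    mask[:num_global, :] = True
    mask[:, :num_global] = True

    # Random pattern: each block additionally attends to a few random blocks,
    # adding long-range connectivity at low cost.
    for i in range(num_blocks):
        rand_idx = torch.randperm(num_blocks, generator=g)[:num_random]
        mask[i, rand_idx] = True

    return mask

if __name__ == "__main__":
    m = block_sparse_mask(num_blocks=8)
    print(m.int())            # visualize the sparsity pattern
    print(m.float().mean())   # fraction of allowed block pairs, well below 1.0
```

The resulting mask is applied at the block level, so attention cost grows with the number of allowed block pairs rather than quadratically with table width.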
Community
Orion-MSP is a tabular foundation model that combines multi-scale sparse attention with Perceiver-style memory for efficient in-context learning on tabular data. The model processes features at multiple resolutions simultaneously, capturing both local feature interactions and global dataset-level patterns through hierarchical attention mechanisms.
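A minimal sketch of the Perceiver-style memory idea follows: a small set of learned latent vectors cross-attends to per-feature embeddings, producing a fixed-size summary whose cost grows linearly with table width. The module name `PerceiverMemory`, the dimensions, and the single cross-attention layer are illustrative assumptions, not the released Orion-MSP architecture.

```python
# Hypothetical sketch of a Perceiver-style memory: learned latents compress a
# variable-width set of feature embeddings into a fixed-size representation.
import torch
import torch.nn as nn

class PerceiverMemory(nn.Module):
    def __init__(self, dim: int = 64, num_latents: int = 16, num_heads: int = 4):
        super().__init__()
        # Learned latent vectors acting as the shared memory slots.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feature_embeddings: torch.Tensor) -> torch.Tensor:
        # feature_embeddings: (batch, num_features, dim)
        batch = feature_embeddings.shape[0]
        latents = self.latents.unsqueeze(0).expand(batch, -1, -1)
        # Latents query the (possibly very wide) feature sequence; cost scales
        # linearly in num_features instead of quadratically.
        summary, _ = self.cross_attn(latents, feature_embeddings, feature_embeddings)
        return self.norm(summary + latents)

if __name__ == "__main__":
    memory = PerceiverMemory()
    cols = torch.randn(2, 500, 64)   # a wide table: 500 feature embeddings
    print(memory(cols).shape)        # torch.Size([2, 16, 64])
```

Because the summary has a fixed number of slots, downstream components can read from and write back to it, which is the kind of bidirectional information flow the paper attributes to its memory module.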
The following similar papers were recommended by the Semantic Scholar API:
- BiSparse-AAS: Bilinear Sparse Attention and Adaptive Spans Framework for Scalable and Efficient Text Summarization (2025)
- ReSSFormer: A Recursive Sparse Structured Transformer for Scalable and Long-Context Reasoning (2025)
- MTmixAtt: Integrating Mixture-of-Experts with Multi-Mix Attention for Large-Scale Recommendation (2025)
- TabGemma: Text-Based Tabular ICL via LLM using Continued Pretraining and Retrieval (2025)
- Hierarchical Resolution Transformers: A Wavelet-Inspired Architecture for Multi-Scale Language Understanding (2025)
- Long-Context Modeling with Dynamic Hierarchical Sparse Attention for On-Device LLMs (2025)
- VIPAMIN: Visual Prompt Initialization via Embedding Selection and Subspace Expansion (2025)