Abstract
A 15-billion-parameter multimodal reasoning model achieves competitive performance through a progressive training methodology without reinforcement learning, demonstrating efficient use of computational resources.
We present Apriel-1.5-15B-Thinker, a 15-billion-parameter open-weights multimodal reasoning model that achieves frontier-level performance through training design rather than sheer scale. Starting from Pixtral-12B, we apply a progressive three-stage methodology: (1) depth upscaling to expand reasoning capacity without pretraining from scratch, (2) staged continual pre-training that first develops foundational text and vision understanding, then enhances visual reasoning through targeted synthetic data generation addressing spatial structure, compositional understanding, and fine-grained perception, and (3) high-quality text-only supervised fine-tuning on curated instruction-response pairs with explicit reasoning traces spanning mathematics, coding, science, and tool use. Notably, our model achieves competitive results without reinforcement learning or preference optimization, isolating the contribution of our data-centric continual pre-training approach. On the Artificial Analysis Intelligence Index, Apriel-1.5-15B-Thinker attains a score of 52, matching DeepSeek-R1-0528 despite requiring significantly fewer computational resources. Across ten image benchmarks, its performance is on average within five points of Gemini-2.5-Flash and Claude Sonnet-3.7, a key achievement for a model operating within single-GPU deployment constraints. Our results demonstrate that thoughtful mid-training design can close substantial capability gaps without massive scale, making frontier-level multimodal reasoning accessible to organizations with limited infrastructure. We release the model checkpoint, all training recipes, and evaluation protocols under the MIT license to advance open-source research.
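To make the depth-upscaling step concrete, below is a minimal sketch of the general technique: growing a pretrained decoder by duplicating existing transformer blocks rather than pretraining a larger model from scratch. The layer-selection strategy, the `depth_upscale` helper, and the use of a tiny `MistralForCausalLM` as an offline stand-in for Pixtral-12B's language decoder are illustrative assumptions, not the exact recipe reported in the paper.

```python
# Illustrative sketch of depth upscaling via layer duplication (assumed recipe,
# not the authors' exact procedure).
import copy

import torch.nn as nn
from transformers import MistralConfig, MistralForCausalLM


def depth_upscale(model: MistralForCausalLM, extra_layers: int) -> MistralForCausalLM:
    """Deep-copy the top `extra_layers` decoder blocks and append the copies,
    increasing depth without retraining from scratch."""
    layers = model.model.layers                      # nn.ModuleList of decoder blocks
    donors = layers[-extra_layers:]                  # hypothetical choice: duplicate the last blocks
    new_layers = [copy.deepcopy(layer) for layer in donors]
    model.model.layers = nn.ModuleList(list(layers) + new_layers)
    model.config.num_hidden_layers = len(model.model.layers)
    # Note: a production implementation would also re-index each copied layer's
    # `layer_idx` so KV-cache bookkeeping stays consistent; omitted here for brevity.
    return model


if __name__ == "__main__":
    # Tiny randomly initialized config stands in for the real 12B decoder
    # so the sketch runs without downloading any weights.
    cfg = MistralConfig(hidden_size=64, intermediate_size=128,
                        num_hidden_layers=8, num_attention_heads=4,
                        num_key_value_heads=2, vocab_size=1000)
    base = MistralForCausalLM(cfg)
    upscaled = depth_upscale(base, extra_layers=4)
    print(upscaled.config.num_hidden_layers)         # 12
```

The duplicated blocks are initially redundant; in the pipeline described above, the staged continual pre-training is what adapts the enlarged network into a coherent 15B-parameter reasoner.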
Community
Introducing ServiceNow's 15B-parameter model that matches DeepSeek-R1-0528, Magistral-Medium-1.2 and Gemini 2.5 Flash on the Artificial Analysis Intelligence Index (AAI 52), delivering comparable results at a fraction of the size (at least 8-10 times smaller)
Frontier-level reasoning on a single GPU
No RL phase – the step-change comes from mid-training
Reasons over images – image + text mid-training enables the model to reason over images without additional image-specific training
Great at reasoning – AIME2025: 88, GPQA: 71, LCB: 73
Follows instructions reliably – IFBench: 62
Tau2-Bench (Telecom): 68 – ready for real-world workflows
Open-weights model to further research and reproducibility (MIT license)
Will the data be released?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Apriel-Nemotron-15B-Thinker (2025)
- InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency (2025)
- K2-Think: A Parameter-Efficient Reasoning System (2025)
- MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes (2025)
- VARCO-VISION-2.0 Technical Report (2025)
- SAIL-VL2 Technical Report (2025)
- VLA-R1: Enhancing Reasoning in Vision-Language-Action Models (2025)
Thanks for your work! I have a quick question: how do you organize the data formats for tasks like Image Reconstruction and Visual Matching in CPT Stage 2? I think this synthetic augmentation approach is particularly interesting.
Thank you!
arXiv Explained breakdown of this paper: https://arxivexplained.com/papers/apriel-15-15b-thinker