Abstract
The study explores pretraining methodology through an open scientific approach, demonstrating that data processing depth and adaptive curriculum strategies significantly impact model capability development.
The foundational pretraining phase determines a model's capability ceiling, as post-training struggles to overcome the foundations established during pretraining, yet it remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale computational resources. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy spanning filtering to synthesis. We train a 3B-parameter model from random initialization on 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that: processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; compositional balance enables targeted intensification while preventing performance collapse; and evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies, forming accumulative scientific knowledge in pretraining.
Community
Pretraining still sets the ceiling for model capability, yet the process itself remains surprisingly under-explored. In this work, we try to address that gap with daVinci-LLM: a fully open pretraining study that releases not only model weights, but also the complete data processing pipeline, training process, and 200+ controlled ablations.
A core motivation behind this project is a structural gap in the field: industry often has the compute but not the freedom to disclose, while academia has the freedom but rarely the compute to run pretraining at meaningful scale. We aim to explore the intersection of both.
Using our Data Darwinism framework, we study pretraining data as a progressive L0-L9 processing pipeline, from filtering to synthesis, and train a 3B model from scratch on 8T tokens with a two-stage adaptive curriculum. Our experiments suggest several broader lessons: processing depth is a major scaling dimension alongside data volume, different domains saturate in different ways, compositional balance is critical for targeted intensification without collapse, and even evaluation protocols can meaningfully affect how we interpret pretraining progress.
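To make the two-stage curriculum concrete, the sketch below shows one way such a schedule can be expressed: a step change in per-domain sampling weights once training crosses a stage boundary. This is a minimal illustration only; the domain names, the stage boundary, and the mixture weights are assumptions, not the values used in daVinci-LLM.

    # Minimal sketch of a two-stage adaptive curriculum (illustrative only).
    # Domain names, stage boundary, and weights are hypothetical assumptions,
    # not the paper's actual configuration.

    STAGE1_MIX = {"web": 0.60, "code": 0.20, "math": 0.10, "synthetic_reasoning": 0.10}
    STAGE2_MIX = {"web": 0.35, "code": 0.25, "math": 0.20, "synthetic_reasoning": 0.20}

    def mixture_at(tokens_seen: float, total_tokens: float = 8e12,
                   stage_boundary: float = 0.75) -> dict[str, float]:
        """Return per-domain sampling weights for the current point in training."""
        progress = tokens_seen / total_tokens
        mix = STAGE1_MIX if progress < stage_boundary else STAGE2_MIX
        # Weights must sum to 1 so they can be used directly as sampling probabilities.
        assert abs(sum(mix.values()) - 1.0) < 1e-9
        return mix

    # Example: query the mixture after 5T of the 8T training tokens (still stage 1 here).
    print(mixture_at(5e12))

The same pattern extends to more stages or to smooth interpolation between mixtures; the point is that the data mixture is a scheduled quantity, shifting from foundational toward reasoning-intensive data as training progresses.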
We hope releasing the full exploration process helps make pretraining research more cumulative, transparent, and scientifically grounded.