The Butterfly Effect: Neural Network Training Trajectories Are Highly Sensitive to Initial Conditions Paper • 2506.13234 • Published Jun 16, 2025
FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language Paper • 2506.20920 • Published Jun 26, 2025 • 64
Improving Dense Contrastive Learning with Dense Negative Pairs Paper • 2210.05063 • Published Oct 11, 2022
SimpleClick: Interactive Image Segmentation with Simple Vision Transformers Paper • 2210.11006 • Published Oct 20, 2022
GEXIA: Granularity Expansion and Iterative Approximation for Scalable Multi-grained Video-language Learning Paper • 2412.07704 • Published Dec 10, 2024
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text Paper • 2506.05209 • Published Jun 5, 2025 • 44
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model Paper • 2502.02737 • Published Feb 4, 2025 • 241
The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale Paper • 2406.17557 • Published Jun 25, 2024 • 98
Arcee's MergeKit: A Toolkit for Merging Large Language Models Paper • 2403.13257 • Published Mar 20, 2024 • 20
Distributed Inference and Fine-tuning of Large Language Models Over The Internet Paper • 2312.08361 • Published Dec 13, 2023 • 28
Git-Theta: A Git Extension for Collaborative Development of Machine Learning Models Paper • 2306.04529 • Published Jun 7, 2023 • 1
Crosslingual Generalization through Multitask Finetuning Paper • 2211.01786 • Published Nov 3, 2022 • 2