The Smol Training Playbook 📚 — The secrets to building world-class LLMs
FineWeb: decanting the web for the finest text data at scale 🍷 — Generate high-quality text data for LLMs using FineWeb
The Ultra-Scale Playbook 🌌 — The ultimate guide to training LLMs on large GPU clusters
Article: DualPipe Explained: A Comprehensive Guide to DualPipe That Anyone Can Understand—Even Without a Distributed Training Background (Feb 28)
Zephyr ORPO Collection — Models and datasets to align LLMs with Odds Ratio Preference Optimisation (ORPO). Recipes: https://github.com/huggingface/alignment-handbook (updated Apr 12, 2024)