Scaling Reasoning can Improve Factuality in Large Language Models Paper • 2505.11140 • Published 23 days ago • 6
Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models Paper • 2505.10554 • Published 24 days ago • 119
Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures Paper • 2505.09343 • Published 25 days ago • 64
Kimi-VL-A3B Collection Moonshot's efficient MoE VLMs, excelling at agentic tasks, long context, and thinking • 6 items • Updated Apr 12 • 65
Llama Nemotron Collection Open, Production-ready Enterprise Models • 8 items • Updated 1 day ago • 60
Gemma 3 QAT Collection Quantization Aware Trained (QAT) Gemma 3 checkpoints. These models preserve quality similar to half precision while using 3x less memory • 15 items • Updated 9 days ago • 195
Llama 3.2 Collection This collection hosts the transformers-format and original repos of the Llama 3.2 and Llama Guard 3 models • 15 items • Updated Dec 6, 2024 • 611
Training Language Models to Self-Correct via Reinforcement Learning Paper • 2409.12917 • Published Sep 19, 2024 • 139
Minitron Collection A family of compressed models obtained via pruning and knowledge distillation • 12 items • Updated 1 day ago • 61
Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers Paper • 2408.06195 • Published Aug 12, 2024 • 74
Gemma 2 2B Release Collection The 2.6B parameter version of Gemma 2. • 6 items • Updated 9 days ago • 80
Article Llama 3.1 - 405B, 70B & 8B with multilinguality and long context By philschmid and 7 others • Jul 23, 2024 • 234