Zero-Overhead Introspection for Adaptive Test-Time Compute Paper • 2512.01457 • Published Dec 1, 2025
New family of 1B models just dropped!
> LiquidAI/LFM2.5-1.2B-Base: 10T → 28T tokens
> LiquidAI/LFM2.5-1.2B-Instruct: new large-scale multi-stage RL
> LiquidAI/LFM2.5-1.2B-JP: our most polite model
> LiquidAI/LFM2.5-VL-1.6B: multi-image, multilingual
> LiquidAI/LFM2.5-Audio-1.5B: 8x faster, no quality loss
Super proud of this release 🤗
Domain Adaptation of Llama3-70B-Instruct through Continual Pre-Training and Model Merging: A Comprehensive Evaluation Paper • 2406.14971 • Published Jun 21, 2024
Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit Paper • 2506.06607 • Published Jun 7, 2025
MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation Paper • 2511.22989 • Published Nov 28, 2025
EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models Paper • 2312.06281 • Published Dec 11, 2023
Democratizing Diplomacy: A Harness for Evaluating Any Large Language Model on Full-Press Diplomacy Paper • 2508.07485 • Published Aug 10, 2025