- Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF
  Image-Text-to-Text • 27B • Updated • 921k • 609
- mradermacher/gemma-3-27b-it-ultra-uncensored-heretic-i1-GGUF
  27B • Updated • 4.21k • 4
- byteshape/Devstral-Small-2-24B-Instruct-2512-GGUF
  Text Generation • 24B • Updated • 3.77k • 27
- mradermacher/Qwen3.5-24B-A3B-Claude-Opus-Gemini-3.1-Pro-Reasoning-Distilled-heretic-i1-GGUF
  24B • Updated • 12.7k • 3
CYGDEN
AI & ML interests
None defined yet
Recent Activity
liked a model about 2 hours ago
LiquidAI/LFM2.5-VL-450M
reacted to SeaWolf-AI's post with 🤗 about 2 hours ago
🧬 Darwin-27B-Opus: 86.9% on GPQA Diamond – World #5, Zero Training
We are excited to share Darwin-27B-Opus, a 27B model that achieved 86.9% on GPQA Diamond – ranking #5 globally on the Hugging Face leaderboard – without a single gradient update.
How? Darwin breeds pretrained models through evolutionary FFN crossbreeding. The father (Qwen3.5-27B) provides the reasoning architecture; the mother (Claude 4.6 Opus Reasoning Distilled) contributes structured chain-of-thought knowledge. CMA-ES automatically discovers optimal per-layer blending ratios – no human tuning required.
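The post stops at this high-level description, but the mechanism it names is sketchable. Below is a minimal, assumption-laden illustration of per-layer FFN crossbreeding with CMA-ES (via the pycma package): the checkpoint IDs, the `.mlp.` weight-naming convention, and the `evaluate_gpqa_subset` fitness helper are placeholders, not the released recipe.

```python
import cma                                    # pycma: pip install cma
import torch
from transformers import AutoModelForCausalLM

# Placeholder parents; the post names them only informally.
father = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.5-27B")
mother = AutoModelForCausalLM.from_pretrained("example/claude-4.6-opus-distilled-27b")
child = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.5-27B")

f_sd, m_sd = father.state_dict(), mother.state_dict()
ffn_keys = [k for k in f_sd if ".mlp." in k]  # blend FFN weights only; attention stays paternal
n_layers = father.config.num_hidden_layers

def breed(genome):
    """Set the child's layer-l FFN to s*mother + (1-s)*father, s = sigmoid(genome[l])."""
    c_sd = child.state_dict()
    for k in ffn_keys:
        layer = int(k.split(".")[2])          # e.g. "model.layers.17.mlp.up_proj.weight"
        a = torch.sigmoid(torch.tensor(float(genome[layer])))  # ratio stays in (0, 1)
        c_sd[k] = a * m_sd[k] + (1 - a) * f_sd[k]
    child.load_state_dict(c_sd)

def fitness(genome):
    breed(genome)
    return -evaluate_gpqa_subset(child)       # hypothetical scorer; CMA-ES minimizes

es = cma.CMAEvolutionStrategy([0.0] * n_layers, 0.5)  # one blending gene per layer
while not es.stop():
    genomes = es.ask()
    es.tell(genomes, [fitness(g) for g in genomes])
breed(es.result.xbest)                        # materialize the best offspring
child.save_pretrained("darwin-27b-sketch")
```

The sigmoid reparameterization keeps every blend a valid convex combination, so CMA-ES can search an unconstrained real-valued genome; whether Darwin uses this exact parameterization is not stated in the post.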
The result surpasses the original Qwen3.5-27B (85.5%), GLM-5.1 (744B, 86.2%), and Qwen3.5-122B (86.6%). A 27B model outperforming 744B – with zero training, zero data, one GPU, ~2 hours.
We also confirmed hybrid vigor on Korean benchmarks: Darwin-27B-KR (a second-generation offspring) surpassed both parents on CLIcK, winning 7 out of 11 categories. The evolutionary optimizer independently assigned 93% of the FFN from the Korean-specialized mother while preserving 93% of the attention from the reasoning-specialized father – autonomously validating our core principle: FFN carries knowledge, attention carries reasoning.
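Under the same illustrative parameterization as the sketch above, that 93%/93% split is just the mean optimized gene per component type. A toy check (the genome below is synthetic, chosen to reproduce the quoted split; real values would come out of the CMA-ES run):

```python
import torch

def mother_share(genome: torch.Tensor, n_layers: int):
    """First n_layers genes blend FFN weights; the next n_layers blend attention."""
    ffn = torch.sigmoid(genome[:n_layers]).mean().item()
    attn = torch.sigmoid(genome[n_layers:]).mean().item()
    return ffn, attn

# Synthetic stand-in: mother-leaning FFN genes, father-leaning attention genes
# (sigmoid(2.6) is roughly 0.93).
genome = torch.cat([torch.full((48,), 2.6), torch.full((48,), -2.6)])
ffn, attn = mother_share(genome, n_layers=48)
print(f"FFN from mother: {ffn:.0%}, attention from mother: {attn:.0%}")
# FFN from mother: 93%, attention from mother: 7%
```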
Public release: 10 days → 300+ community derivatives, 120K+ downloads.
Links:
Darwin-27B-Opus: https://huggingface.co/FINAL-Bench/Darwin-27B-Opus
Article: https://huggingface.co/blog/FINAL-Bench/darwin-gpqa
Darwin Family Collection: https://huggingface.co/collections/FINAL-Bench/darwin-family
If foundation models are raw ore, Darwin is the forge. We are just getting started. 🔥
reacted to SeaWolf-AI's post with 🧠 about 2 hours ago
Organizations
None yet