
Coder / Programming - MOE, Reasoning, Reg, Imatrix, Fused.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", and MOE (1x to 8x / 128 experts) variants, expanded to create better and stronger code, faster.
Text Generation • 39B • Updated • 8.65k • 1 • Note: Repo with 41 coding models in 1 or 2 quants; many of these models now have full repos and full quants, listed below. LISTING ORDER OF THIS COLLECTION: MOEs, in terms of raw power/size; Brainstorm (an adapter by DavidAU); standard models, in terms of raw power/size. QUANTS: For complex coding / long coding projects, I strongly suggest you use the highest quant(s) you can, in both Imatrix and regular form, with Imatrix preferred. Likewise, prefer higher-parameter-count models and/or MOEs.
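The quant advice above amounts to a simple selection rule: among the GGUF files available for a model, prefer Imatrix builds, and within each group take the highest bit-width you can run. A minimal sketch of that rule, assuming the usual llama.cpp quant naming (Q4_K_M, Q6_K, Q8_0, ...) and using hypothetical filenames:

```python
import re

def quant_rank(filename: str) -> tuple:
    """Rank a GGUF quant file: Imatrix builds first, then higher bit-width.

    Assumes llama.cpp-style quant names and that 'imatrix'/'imat' in the
    filename marks an Imatrix build; filenames here are hypothetical.
    """
    m = re.search(r"Q(\d+)", filename, re.IGNORECASE)
    bits = int(m.group(1)) if m else 0
    is_imatrix = "imat" in filename.lower()
    # Tuples compare element-wise, so Imatrix outranks regular,
    # and bit-width breaks ties within each group.
    return (is_imatrix, bits)

# Hypothetical quant files for one model repo:
files = [
    "model-Q4_K_M.gguf",
    "model-Q8_0.gguf",          # highest bit-width, regular quant
    "model-Q6_K-imatrix.gguf",  # Imatrix build, preferred per the note above
]
best = max(files, key=quant_rank)
print(best)  # the Imatrix build wins over the larger regular quant
```

This is only a heuristic for picking a download, not anything the repos themselves provide; within your VRAM budget you would still take the highest-bit Imatrix quant that fits.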
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4-256k-ctx
Text Generation • 53B • Updated • 18 • Note: 128-expert MOE model with 256k context. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4-128k
Text Generation • 53B • Updated • 129 • 1 • Note: 128-expert MOE model with 128k context. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4
Text Generation • 53B • Updated • 63 • 3 • Note: 128-expert MOE model. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-v1.2
Text Generation • 87B • Updated • 15 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-v1.1
Text Generation • 87B • Updated • 57 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-NEO-MAX-44B-gguf
Text Generation • 44B • Updated • 1.1k • Note: Devstral (coder) with Reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B
Text Generation • 44B • Updated • 95 • Note: Devstral (coder) with Reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B
Text Generation • 44B • Updated • 24 • Note: Devstral (coder) with Reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x22B-MOE-Power-Codestral-Ultimate-39B
Text Generation • 39B • Updated • 24 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx
Text Generation • 53B • Updated • 10 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context.
DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B
Text Generation • 53B • Updated • 16 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-6x7B-Six-Pack-Coder-Instruct-42B-128k-ctx
Text Generation • 42B • Updated • 8 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context.
DavidAU/Qwen2.5-6x7B-Six-Pack-Coder-Instruct-42B
Text Generation • 42B • Updated • 10 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-4x7B-Quad-Coder-Instruct-30B-128k-ctx
Text Generation • 30B • Updated • 8 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context.
DavidAU/Qwen2.5-4x7B-Quad-Coder-Instruct-30B
Text Generation • 30B • Updated • 15 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1-128k-ctx
Text Generation • 25B • Updated • 8 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context.
DavidAU/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1
Text Generation • 25B • Updated • 12 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X7B-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 16 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X7B-Coder-CodeV-R1-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 15 • 1 • Note: Specialized two-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X7B-Coder-VisCoder-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 10 • Note: Specialized two-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X7B-Coder-Soar-qwen-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 16 • 1 • Note: Specialized two-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-V2-28B
Text Generation • 28B • Updated • 12 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-28B-gguf
Text Generation • 28B • Updated • 185
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-28B
Text Generation • 28B • Updated • 4 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-Godzilla-Coder-51B-128k
Text Generation • 51B • Updated • 19 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k.
DavidAU/Qwen2.5-Godzilla-Coder-51B-gguf
Text Generation • 51B • Updated • 1.36k • 3
DavidAU/Qwen2.5-Godzilla-Coder-51B
Text Generation • 51B • Updated • 51 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-Godzilla-Coder-V2-51B-128k
Text Generation • 51B • Updated • 22 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k.
DavidAU/Qwen2.5-Godzilla-Coder-V2-51B
Text Generation • 51B • Updated • 23 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 12 • Note: Newest Devstral version, with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm20x-34B
Text Generation • 34B • Updated • 17 • Note: Newest Devstral version, with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 14 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm20x-34B
Text Generation • 34B • Updated • 15 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-42B
Text Generation • 42B • Updated • 12 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-42B
Text Generation • 42B • Updated • 13 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-20B
Text Generation • 20B • Updated • 14 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-20B
Text Generation • 20B • Updated • 18 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-12B
Text Generation • 12B • Updated • 54 • 4 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-12B
Text Generation • 12B • Updated • 29 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 12 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x-128k-ctx
Text Generation • 12B • Updated • 5 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Jan-Nano-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 16 • 3 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 8 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Polaris-Preview-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 10 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 12 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x
Text Generation • 6B • Updated • 9 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 4 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model. 128k context.
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 8 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 4 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model. 128k context.
DavidAU/Qwen3-Bootes-Quick-Coder-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 9 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 11
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32
Text Generation • 6B • Updated • 6 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32-enhanced.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-128k-ctx
Text Generation • 6B • Updated • 6 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32-enhanced, with 128k context.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-192k-ctx
Text Generation • 6B • Updated • 7 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32-enhanced, with 192k context.
DavidAU/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx
Text Generation • 11B • Updated • 8 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Model is fused from two coder models.
DavidAU/Qwen2.5-Wolverine-CODER-11B-V2
Text Generation • 11B • Updated • 7 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Model is fused from two coder models.
DavidAU/Qwen2.5-Wolverine-CODER-11B-128k-ctx
Text Generation • 11B • Updated • 6 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context.
DavidAU/Qwen2.5-Wolverine-CODER-11B-gguf
Text Generation • 11B • Updated • 1.11k • 2 • Note: Model is fused from two coder models.
DavidAU/Qwen2.5-Wolverine-CODER-11B
Text Generation • 11B • Updated • 12 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Model is fused from two coder models.
DavidAU/Qwen2.5-OpenCodeReasoning-Nemotron-1.1-7B-NEO-imatix-gguf
Text Generation • 8B • Updated • 1.11k • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 3.7k • 7 • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance. 40k context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused from two coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B
Text Generation • 0.8B • Updated • 55 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused from two coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 7 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Stronger than V1. Model is fused from two coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 34 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Stronger than V1. Model is fused from two coder models.