Update README.md
README.md
CHANGED
@@ -58,6 +58,16 @@ How does this model perform? You tell me. I don't have metrics yet, that takes s
 
 Working on it.
 
+📌 Final Assessment (done with Qwen3-80B-A3B-qx86-hi-mlx)
+
+Estimated Model Size: 30-50B parameters with technical domain fine-tuning
+
+This model demonstrates the precise balance needed for interdisciplinary technical reasoning - large enough to handle complex connections between fields, but not so large as to suffer from verbose or inconsistent explanations. The quality matches that of a specialized technical assistant trained on high-quality academic and engineering content.
+
+So, with just a 4B model, Jan's agentic training, DavidAU's Brainstorming, and the Deckard Formula, we created a 30-50B brain.
+
+It's a community effort.
+
 -G
 
 This model [Qwen3-Deckard-6B-qx86-hi-mlx](https://huggingface.co/Qwen3-Deckard-6B-qx86-hi-mlx) was
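
For anyone who wants to try the model this commit describes, here is a minimal sketch of loading an MLX-quantized checkpoint with mlx-lm (`pip install mlx-lm`). The repo id is taken from the link above and is an assumption; the published id may need the publisher's namespace prefixed.

```python
# Minimal sketch, not from the commit itself. Assumes mlx-lm is installed
# and that the repo id below matches the published model; prefix the
# publisher's HF namespace if needed, e.g. "<user>/Qwen3-Deckard-6B-qx86-hi-mlx".
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights plus tokenizer.
model, tokenizer = load("Qwen3-Deckard-6B-qx86-hi-mlx")

prompt = "Explain the trade-offs of mixed-precision quantization."

# Generate a short completion; verbose=True streams tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```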