---
base_model: DavidAU/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL
language:
- en
- fr
- zh
- de
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- Qwen3-Coder-30B-A3B-Instruct
- Qwen3-30B-A3B
- mixture of experts
- 128 experts
- 8 active experts
- 1 million context
- qwen3
- finetune
- brainstorm 40x
- brainstorm
- optional thinking
- qwen3_moe
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
Static quants of https://huggingface.co/DavidAU/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF).***

Weighted/imatrix quants are not available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.
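Concretely, the multi-part files here are plain byte-level splits, so joining them is straight concatenation (`cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf` on Unix-like systems). Below is a minimal Python sketch doing the same, assuming the two Q8_0 part files from the table below have already been downloaded into the working directory:

```python
# Join byte-split GGUF parts back into a single file.
# The part files sort lexicographically (part1of2 < part2of2),
# so a plain sorted glob restores the correct order.
from pathlib import Path
import shutil

name = "Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q8_0.gguf"
parts = sorted(Path(".").glob(name + ".part*"))

with open(name, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams bytes; never loads ~56 GB into RAM
```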
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q2_K.gguf) | Q2_K | 19.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q3_K_S.gguf) | Q3_K_S | 23.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q3_K_M.gguf) | Q3_K_M | 25.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q3_K_L.gguf) | Q3_K_L | 27.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q4_K_S.gguf) | Q4_K_S | 30.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q4_K_M.gguf) | Q4_K_M | 32.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q6_K.gguf) | Q6_K | 43.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q8_0.gguf.part2of2) | Q8_0 | 56.4 | fast, best quality |
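For programmatic access, here is a minimal sketch that downloads one of the quants above and runs it locally. The use of `huggingface_hub` and `llama-cpp-python` is my own illustrative choice, not something this repo prescribes; any GGUF-capable runtime (llama.cpp, LM Studio, ollama, ...) works equally well:

```python
# Minimal sketch: fetch the Q4_K_S quant from the table above and query it.
# Library choice and parameters are illustrative assumptions, not requirements.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-GGUF",
    filename="Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.Q4_K_S.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,       # modest context window; raise it at the cost of more memory
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows; 0 for CPU-only
)

result = llm("Write a Python function that reverses a string.", max_tokens=256)
print(result["choices"][0]["text"])
```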
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->