This is a new-old-stock version of the model, with embeddings quantized at 8 bits. The original Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II-qx86-hi-mlx uses 6-bit embeddings.
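
For readers who want to experiment with this kind of variant themselves, here is a hedged sketch of pinning the embedding table to 8 bits via mlx-lm's `quant_predicate` hook. The actual qx86x recipe is not published in this card, so the layer selection and group sizes below are illustrative assumptions only.

```python
# Sketch only: pin embeddings to 8 bits, quantize the rest at a 6-bit base.
# The real qx86x layer mix and group sizes are assumptions here.
from mlx_lm import convert

def embeddings_at_8_bits(path, module, config):
    # Keep the embedding table at 8 bits (the change this card describes) ...
    if "embed_tokens" in path:
        return {"group_size": 32, "bits": 8}
    # ... and use a 6-bit base for everything else (assumed, not confirmed).
    return {"group_size": 32, "bits": 6}

convert(
    "DavidAU/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II",
    mlx_path="Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II-qx86x-hi-mlx",
    quantize=True,
    quant_predicate=embeddings_at_8_bits,
)
```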
Perplexity: 4.429 ± 0.031
Peak memory: 43.43 GB
Full metrics are coming soon. If this variant proves better than qx86-hi, it will replace it in the catalog.
-G
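
For context, a perplexity figure like the one quoted above can be measured with a short mlx-lm script. The evaluation corpus (`eval.txt`) and the 512-token window below are assumptions for illustration, not the exact setup behind the number on this card.

```python
# Minimal perplexity sketch; corpus file and window size are assumptions.
import math

import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

model, tokenizer = load("Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II-qx86x-hi-mlx")

# Tokenize a representative text corpus (hypothetical file).
tokens = tokenizer.encode(open("eval.txt").read())

window = 512  # tokens scored per forward pass (assumption)
nll_sum, n_tokens = 0.0, 0
for start in range(0, len(tokens) - window - 1, window):
    chunk = mx.array(tokens[start : start + window + 1])[None]
    logits = model(chunk[:, :-1])  # next-token logits for each position
    losses = nn.losses.cross_entropy(logits, chunk[:, 1:], reduction="none")
    nll_sum += losses.sum().item()
    n_tokens += losses.size

print(f"Perplexity: {math.exp(nll_sum / n_tokens):.3f}")
```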
This model Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II-qx86x-hi-mlx was converted to MLX format from DavidAU/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II using mlx-lm version 0.28.3.
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer.
model, tokenizer = load("Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II-qx86x-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
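
The same generation can also be run from the command line with the CLI that ships with mlx-lm:

```bash
mlx_lm.generate --model Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-II-qx86x-hi-mlx \
  --prompt "hello"
```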