GGUFs split into ~5 GB chunks. Run the Q2_K quant straight from Hugging Face with llama-cli:
build/bin/llama-cli -hf lefromage/Qwen3-Next-80B-A3B-Instruct-split-GGUF:Q2_K --prompt 'What is the capital of France?' --no-mmap -st
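The same quant can also be served over HTTP with llama-server and queried through its OpenAI-compatible endpoint. A minimal sketch, assuming a llama.cpp build that includes llama-server and its default port 8080:

build/bin/llama-server -hf lefromage/Qwen3-Next-80B-A3B-Instruct-split-GGUF:Q2_K --no-mmap
# in another shell: query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"What is the capital of France?"}]}'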
Another way to download the Q2_K quant pieces is with the Hugging Face CLI:
pip install hf_transfer 'huggingface_hub[cli]'
time hf download lefromage/Qwen3-Next-80B-A3B-Instruct-split-GGUF --include "*Q2_K*.gguf" --local-dir Q2_K
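Note that hf_transfer is only used when HF_HUB_ENABLE_HF_TRANSFER is set in the environment, so to get the faster Rust-based downloader the command above becomes:

# enable the hf_transfer backend for faster downloads
HF_HUB_ENABLE_HF_TRANSFER=1 hf download lefromage/Qwen3-Next-80B-A3B-Instruct-split-GGUF --include "*Q2_K*.gguf" --local-dir Q2_K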
build/bin/llama-cli -m Q2_K/Qwen3-Next-80B-A3B-Instruct-Q2_K-00001-of-*.gguf --no-mmap --prompt 'what is the capital of france' -st
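Pointing llama-cli at the first shard is enough; the remaining pieces are picked up automatically. If a single file is preferred, the shards can be merged back with llama.cpp's gguf-split tool. A sketch, assuming llama-gguf-split was built alongside llama-cli and the paths from the download step above:

# merge the split GGUF back into one file
build/bin/llama-gguf-split --merge Q2_K/Qwen3-Next-80B-A3B-Instruct-Q2_K-00001-of-*.gguf Q2_K/Qwen3-Next-80B-A3B-Instruct-Q2_K-merged.gguf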
Check https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF for more details.
Currently getting ~6 tokens per second of generation on a simple prompt:
time build/bin/llama-cli -hf lefromage/Qwen3-Next-80B-A3B-Instruct-split-GGUF:Q2_K --no-mmap --prompt 'explain quantum computing in a paragraph' -st
...
user
explain quantum computing in a paragraph
assistant
Quantum computing is a revolutionary approach to computation that leverages the principles of quantum mechanics—such as superposition, entanglement, and interference—to process information in fundamentally different ways than classical computers. Instead of using binary bits (0 or 1), quantum computers use quantum bits, or qubits, which can exist in a combination of 0 and 1 simultaneously thanks to superposition. This allows a quantum computer to explore many possible solutions at once. When qubits become entangled, their states become interdependent, meaning the state of one instantly influences the other, even at a distance. By manipulating these qubits with precise microwave or laser pulses, quantum algorithms can solve certain problems—like factoring large numbers, simulating molecules, or optimizing complex systems—exponentially faster than classical computers. While still in early development and highly sensitive to environmental noise, quantum computing holds the potential to transform fields like cryptography, drug discovery, artificial intelligence, and financial modeling. [end of text]
llama_perf_sampler_print: sampling time = 13.05 ms / 210 runs ( 0.06 ms per token, 16093.19 tokens per second)
llama_perf_context_print: load time = 12190.98 ms
llama_perf_context_print: prompt eval time = 5201.06 ms / 14 tokens ( 371.50 ms per token, 2.69 tokens per second)
llama_perf_context_print: eval time = 31579.94 ms / 195 runs ( 161.95 ms per token, 6.17 tokens per second)
llama_perf_context_print: total time = 36857.21 ms / 209 tokens
llama_perf_context_print: graphs reused = 0
llama_memory_breakdown_print: | memory breakdown [MiB] | total free self model context compute unaccounted |
llama_memory_breakdown_print: | - Metal (Apple M4 Max) | 98304 = 70034 + (28151 = 27675 + 171 + 304) + 117 |
llama_memory_breakdown_print: | - Host | 167 = 97 + 0 + 70 |
ggml_metal_free: deallocating
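For a more controlled throughput measurement than a single prompt run, llama-bench can be pointed at the first shard of the local download. A sketch, using llama-bench's default prompt-processing and text-generation tests:

build/bin/llama-bench -m Q2_K/Qwen3-Next-80B-A3B-Instruct-Q2_K-00001-of-*.gguf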
Base model: Qwen/Qwen3-Next-80B-A3B-Instruct