Instructions for using axyn/axe-blade-4b with libraries, inference providers, notebooks, and local apps.
- Libraries
- llama-cpp-python
How to use axyn/axe-blade-4b with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="axyn/axe-blade-4b",
    filename="axe-blade-4b.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use axyn/axe-blade-4b with llama.cpp:
Install with Homebrew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf axyn/axe-blade-4b

# Run inference directly in the terminal:
llama-cli -hf axyn/axe-blade-4b
```
Install with WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf axyn/axe-blade-4b

# Run inference directly in the terminal:
llama-cli -hf axyn/axe-blade-4b
```
Use a pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf axyn/axe-blade-4b

# Run inference directly in the terminal:
./llama-cli -hf axyn/axe-blade-4b
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf axyn/axe-blade-4b

# Run inference directly in the terminal:
./build/bin/llama-cli -hf axyn/axe-blade-4b
```
Use Docker
```bash
docker model run hf.co/axyn/axe-blade-4b
```
- LM Studio
- Jan
- vLLM
How to use axyn/axe-blade-4b with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "axyn/axe-blade-4b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "axyn/axe-blade-4b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/axyn/axe-blade-4b
```
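Because the vLLM server speaks the OpenAI-compatible API, you can also call it from Python with the official `openai` client instead of curl. A minimal sketch, assuming `pip install openai` and the server running on its default port 8000:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server. vLLM ignores the API key
# by default, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="axyn/axe-blade-4b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```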
- Ollama
How to use axyn/axe-blade-4b with Ollama:
```bash
ollama run hf.co/axyn/axe-blade-4b
```
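If you prefer calling the model from code, the `ollama` Python package wraps the same local daemon. A minimal sketch, assuming `pip install ollama` and that the Ollama daemon is running with the model pulled:

```python
import ollama

# Chat with the locally pulled model through the Ollama daemon.
response = ollama.chat(
    model="hf.co/axyn/axe-blade-4b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])
```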
- Unsloth Studio
How to use axyn/axe-blade-4b with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for axyn/axe-blade-4b to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for axyn/axe-blade-4b to start chatting
```
Using HuggingFace Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for axyn/axe-blade-4b to start chatting
```
- Pi
How to use axyn/axe-blade-4b with Pi:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf axyn/axe-blade-4b
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "axyn/axe-blade-4b" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use axyn/axe-blade-4b with Hermes Agent:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf axyn/axe-blade-4b
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default axyn/axe-blade-4b
```
Run Hermes
```bash
hermes
```
- Docker Model Runner
How to use axyn/axe-blade-4b with Docker Model Runner:
```bash
docker model run hf.co/axyn/axe-blade-4b
```
- Lemonade
How to use axyn/axe-blade-4b with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull axyn/axe-blade-4b
```
Run and chat with the model
```bash
lemonade run user.axe-blade-4b-{{QUANT_TAG}}
```
List all available models
```bash
lemonade list
```
AXE-BLADE-4B
Precision code specialist. Runs on your hardware. No cloud required.
AXE-BLADE is a distilled reasoning model purpose-built for fast, accurate code generation, refactoring, and tool-calling. It delivers production-quality code output in a 2.3GB package that runs at full speed on consumer hardware.
Part of the AXE Fleet: sovereign AI designed to run entirely on local hardware with zero cloud dependency.
Model Details
| Property | Value |
|---|---|
| Base Architecture | Qwen3-4B |
| Training Method | Multi-stage distillation from frontier reasoning models |
| Parameters | 4 billion |
| Format | GGUF (Q4_K_M) |
| Download Size | 2.3 GB |
| Context Window | 32,768 tokens |
| Specialization | Code generation, refactoring, tool-calling |
| Target Hardware | Apple Silicon (M1/M2/M3/M4), CUDA GPUs, CPU |
What Makes BLADE Different
Most small models sacrifice quality for size. BLADE doesn't.
- Thinks before it codes. Step-by-step reasoning produces correct solutions, not plausible-looking ones.
- Native tool-calling. First-class `<tool_call>` support for agentic workflows, IDE integrations, and autonomous coding pipelines (a parsing sketch follows this list).
- Clean output by default. No filler, no preamble. Just the solution.
- Type-safe and idiomatic. Type annotations, proper naming conventions, and production patterns out of the box.
- Multi-language. Python, TypeScript, Rust, Go, C++, Bash, SQL, and more.
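The card does not pin down the exact tool-call wire format, but Qwen3-family models typically emit each call as a JSON object wrapped in `<tool_call>` tags. A minimal parsing sketch under that assumption; the `get_weather` payload is purely illustrative:

```python
import json
import re

# ASSUMPTION: the model emits one JSON object per <tool_call>...</tool_call>
# block, Qwen3-style, e.g.
# <tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list[dict]:
    """Return the parsed JSON payload of every <tool_call> block in `text`."""
    calls = []
    for match in TOOL_CALL_RE.finditer(text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than crash the agent loop
    return calls

# Hypothetical model output:
output = '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
print(extract_tool_calls(output))
# [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```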
Benchmarks
Evaluated across our internal fleet of 21 models with 8 sampling profiles:
| Metric | Score |
|---|---|
| Overall Grade | A |
| Fleet Score | 97.5 / 100 |
| Ranking | Top 3 out of 21 models |
| Code Accuracy | Consistently correct across function-level and module-level tasks |
| Tool-Call Compliance | Structured output follows schema reliably |
BLADE outperforms several 7B and 8B models on code tasks while using less than half the memory.
Quick Start
Ollama (Recommended)
```bash
ollama run hf.co/axyn/axe-blade-4b
```
llama.cpp
```bash
./llama-cli -m axe-blade-4b.gguf \
  -p "Write a Python async web scraper with proper error handling" \
  -n 1024 --temp 0.7
```
Python (llama-cpp-python)
```python
from llama_cpp import Llama

llm = Llama(model_path="axe-blade-4b.gguf", n_ctx=8192, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a precise code assistant."},
        {"role": "user", "content": "Write a Redis-backed rate limiter in Python"}
    ],
    temperature=0.7,
    max_tokens=1024
)
print(response["choices"][0]["message"]["content"])
```
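For interactive use, llama-cpp-python can also stream tokens as they are generated rather than returning the full reply at once. A short variant reusing the `llm` object from the example above (the prompt is illustrative):

```python
# Stream the reply; each chunk carries an incremental "delta" dict.
stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in Python"}],
    temperature=0.7,
    max_tokens=1024,
    stream=True,
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()
```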
Sample Output
Prompt: "Write a function to validate email addresses with proper regex"
BLADE Response:
```python
import re
from typing import Optional

def validate_email(email: str) -> Optional[str]:
    """Validate an email address. Returns normalized email or None."""
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    email = email.strip().lower()
    if re.match(pattern, email):
        return email
    return None
```
No boilerplate. No explanation you didn't ask for. Just working code.
Hardware Requirements
| Setup | Memory Required | Speed |
|---|---|---|
| Apple Silicon (M1+) | 3 GB | ~40 tok/s |
| NVIDIA GPU (8GB+) | 3 GB VRAM | ~50 tok/s |
| CPU-only | 4 GB RAM | ~8 tok/s |
BLADE fits comfortably alongside your other applications. Run AI-assisted coding without sending your code to any cloud.
Use Cases
- Local coding assistant: IDE integration without API keys or subscriptions
- Agentic pipelines: tool-calling support for autonomous code review, refactoring, and generation
- Air-gapped environments: full capability with zero network access
- Edge deployment: small enough for embedded systems and field devices
- CI/CD integration: automated code review and generation in your pipeline
The AXE Fleet
AXE Technology builds sovereign AI systems: local models that run on your hardware, no cloud required.
The fleet includes specialized models for code, research, strategy, security, and general intelligence. Each model is distilled and optimized for its domain, then benchmarked against the full fleet to ensure quality.
- Website: axe.onl
- Mission: Free intelligence. No gatekeepers. No subscriptions.
License
Apache 2.0. Use it however you want, commercially or otherwise.
Citation
```bibtex
@misc{axe-blade-4b,
  title={AXE-BLADE-4B: Distilled Code Specialist},
  author={AXE Technology},
  year={2026},
  url={https://huggingface.co/axyn/axe-blade-4b}
}
```