Instructions for using Vishvjit2001/autonomusHDL with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use Vishvjit2001/autonomusHDL with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Vishvjit2001/autonomusHDL",
    filename="Qwen2.5 coder-14B-Q3_K_L.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Verilog module for a 4-bit synchronous counter with reset."}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Vishvjit2001/autonomusHDL with llama.cpp:
Install from brew

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Vishvjit2001/autonomusHDL:Q3_K_L

# Run inference directly in the terminal:
llama-cli -hf Vishvjit2001/autonomusHDL:Q3_K_L
```

Install from WinGet (Windows)

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Vishvjit2001/autonomusHDL:Q3_K_L

# Run inference directly in the terminal:
llama-cli -hf Vishvjit2001/autonomusHDL:Q3_K_L
```

Use pre-built binary

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Vishvjit2001/autonomusHDL:Q3_K_L

# Run inference directly in the terminal:
./llama-cli -hf Vishvjit2001/autonomusHDL:Q3_K_L
```

Build from source code

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Vishvjit2001/autonomusHDL:Q3_K_L

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Vishvjit2001/autonomusHDL:Q3_K_L
```

Use Docker

```bash
docker model run hf.co/Vishvjit2001/autonomusHDL:Q3_K_L
```
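However you install llama.cpp, once `llama-server` is running you can also query it from code through its OpenAI-compatible API. A minimal sketch using the `openai` Python package, assuming the server's default address `http://localhost:8080/v1` (the `model` field is informational here; llama-server serves whichever model it was started with):

```python
# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default; no real API key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="Vishvjit2001/autonomusHDL:Q3_K_L",
    messages=[
        {"role": "user", "content": "Write a Verilog module for a 4-bit synchronous counter with reset."}
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```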
- LM Studio
- Jan
- Ollama
How to use Vishvjit2001/autonomusHDL with Ollama:
```bash
ollama run hf.co/Vishvjit2001/autonomusHDL:Q3_K_L
```
- Unsloth Studio
How to use Vishvjit2001/autonomusHDL with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)

```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Vishvjit2001/autonomusHDL to start chatting.
```

Install Unsloth Studio (Windows)

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Vishvjit2001/autonomusHDL to start chatting.
```

Using Hugging Face Spaces for Unsloth

No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for Vishvjit2001/autonomusHDL to start chatting.
- Pi
How to use Vishvjit2001/autonomusHDL with Pi:
Start the llama.cpp server

```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Vishvjit2001/autonomusHDL:Q3_K_L
```

Configure the model in Pi

```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the provider to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Vishvjit2001/autonomusHDL:Q3_K_L" }
      ]
    }
  }
}
```

Run Pi

```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Vishvjit2001/autonomusHDL with Hermes Agent:
Start the llama.cpp server

```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Vishvjit2001/autonomusHDL:Q3_K_L
```

Configure Hermes

```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Vishvjit2001/autonomusHDL:Q3_K_L
```

Run Hermes

```bash
hermes
```
- Docker Model Runner
How to use Vishvjit2001/autonomusHDL with Docker Model Runner:
```bash
docker model run hf.co/Vishvjit2001/autonomusHDL:Q3_K_L
```
- Lemonade
How to use Vishvjit2001/autonomusHDL with Lemonade:
Pull the model

```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Vishvjit2001/autonomusHDL:Q3_K_L
```

Run and chat with the model

```bash
lemonade run user.autonomusHDL-Q3_K_L
```

List all available models

```bash
lemonade list
```
# AutonomusHDL: Verilog-Finetuned Qwen2.5-Coder-14B (GGUF)
AutonomusHDL is a fine-tuned version of Qwen2.5-Coder-14B-Instruct specifically optimized for Hardware Description Language (HDL) tasks, with a focus on Verilog code generation, completion, and reasoning. The model is provided in GGUF format for efficient local inference via llama.cpp and compatible runtimes.
## Available Files
| File | Quantization | Size | Use Case |
|---|---|---|---|
| `qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf` | Q8_0 | 15.7 GB | Highest quality, more VRAM/RAM |
| `Qwen2.5 coder-14B-Q3_K_L.gguf` | Q3_K_L | 7.9 GB | Lighter, faster, lower memory footprint |
Recommendation: Use the Q8_0 model if you have ≥16 GB RAM/VRAM for best output quality. Use Q3_K_L for systems with limited resources.
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen2.5-Coder-14B-Instruct |
| Fine-tune Domain | Verilog / HDL Code Generation |
| Format | GGUF |
| License | Apache 2.0 |
| Parameters | 14B |
| Context Length | Up to 128K tokens (base model) |
## Quickstart

### With llama.cpp
```bash
# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build -j

# Run inference
./build/bin/llama-cli \
  -m qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf \
  -p "Write a Verilog module for a 4-bit synchronous counter with reset." \
  -n 512 \
  --temp 0.2
```
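The same file can be driven from Python via llama-cpp-python. A minimal sketch, assuming the Q8_0 file sits in the working directory (`n_ctx` and the sampling settings are illustrative, not prescribed by this repo):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the local GGUF file; n_ctx sets the context window for this session.
llm = Llama(
    model_path="qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Verilog module for a 4-bit synchronous counter with reset."}
    ],
    temperature=0.2,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```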
### With Ollama

```bash
# Create a Modelfile
echo 'FROM ./qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf' > Modelfile

# Import and run
ollama create autonomusHDL -f Modelfile
ollama run autonomusHDL
```
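Once imported, the model can also be called programmatically. A minimal sketch using the `ollama` Python package, assuming the local Ollama daemon is running and the model was created as `autonomusHDL` above:

```python
# pip install ollama
import ollama

# Talks to the local Ollama daemon (http://localhost:11434 by default).
response = ollama.chat(
    model="autonomusHDL",
    messages=[
        {"role": "user", "content": "Implement a Moore FSM in Verilog for a traffic light controller with states: RED, GREEN, YELLOW."}
    ],
)
print(response["message"]["content"])
```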
### With LM Studio

1. Download one of the `.gguf` files above.
2. Open LM Studio → Load Model → select the downloaded file.
3. Start chatting with Verilog prompts directly.
## Example Prompts
**Module generation:**

```
Write a Verilog module for a parameterized FIFO with configurable depth and width.
```

**Debugging:**

```
The following Verilog code has a timing issue. Identify and fix it:
[paste your code]
```

**Testbench generation:**

```
Generate a SystemVerilog testbench for a 32-bit ALU module with add, sub, AND, OR, and XOR operations.
```

**FSM design:**

```
Implement a Moore FSM in Verilog for a traffic light controller with states: RED, GREEN, YELLOW.
```
## Intended Use Cases
- RTL design and Verilog code generation
- HDL code completion and auto-suggestions
- Testbench and assertion generation
- Debugging and explaining existing Verilog/VHDL code
- Learning and educational HDL workflows
- Integration into EDA tool pipelines
## Hardware Requirements
| Quantization | Min RAM/VRAM | Recommended |
|---|---|---|
| Q8_0 (15.7 GB) | 16 GB | 24 GB+ |
| Q3_K_L (7.9 GB) | 8 GB | 12 GB+ |
For CPU-only inference, ensure you have sufficient system RAM. GPU offloading via llama.cpp is supported with CUDA/Metal/Vulkan.
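From Python, llama-cpp-python exposes the same layer-offload control as the CLI's `-ngl` flag through `n_gpu_layers`. A minimal sketch, assuming the Q3_K_L file from this repo is in the working directory and llama-cpp-python was built with GPU support:

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the GPU (CUDA/Metal/Vulkan build required);
# use a smaller number, e.g. 20, to split the model between GPU and system RAM.
llm = Llama(
    model_path="Qwen2.5 coder-14B-Q3_K_L.gguf",
    n_gpu_layers=-1,
    n_ctx=4096,
)
```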
## License
This model is released under the Apache 2.0 License. The base model weights are subject to the Qwen2.5 license.
## Acknowledgements
- Base model: Qwen2.5-Coder-14B-Instruct by Alibaba Cloud
- GGUF conversion tooling: llama.cpp by Georgi Gerganov
## Contact
For questions, issues, or collaboration, reach out via the Community tab on this repository.