πŸš€ omriX

A well-trained small coding agent built for speed.

omriX is a lightweight, open-source coding-focused model designed for speed, efficiency, and practical developer workflows. With a compact disk size of roughly 3 GB, it’s ideal for local inference, low-cost deployments, and experimentation without heavy hardware requirements.


✨ Key Features

  • ⚑ Fast inference β€” optimized for quick responses
  • 🧠 Coding-focused β€” tuned for programming and code-related tasks
  • πŸ“¦ Lightweight β€” ~3 GB disk size
  • πŸ”“ Open source β€” Apache-2.0 license
  • πŸ’Έ Cheap to run β€” suitable for low-resource environments
  • πŸ€– Agent-friendly β€” works well as a small coding agent component

🧩 Use Cases

  • Code understanding & classification
  • Lightweight coding assistants
  • Student projects / FYPs
  • Local developer tools
  • Agent pipelines requiring fast, small models
  • Prototyping and experimentation

πŸ“₯ Model Details

  • Model ID: xlelords/omriX
  • Architecture: qwen2, ~2B parameters
  • Format: GGUF (16-bit)
  • License: Apache-2.0
  • Language: English
  • Pipeline Tag: Text Classification
  • Disk Size: ~3 GB
  • Status: Fully open-source
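
To fetch the weights locally first (roughly 3 GB on disk), one option is πŸ€— huggingface_hub. A minimal sketch:

from huggingface_hub import snapshot_download

# Download the full repository (or reuse the local cache) and print its path
local_path = snapshot_download("xlelords/omriX")
print(local_path)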

πŸš€ Quick Start

Example usage with πŸ€— Transformers:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "xlelords/omriX"

# Load the tokenizer and the classification model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Explain what this function does: def add(a, b): return a + b"

# Tokenize and run a forward pass; gradients aren't needed for inference
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits)

⚠️ Note: The pipeline interface depends on your downstream setup.
omriX is commonly used as a component model inside agents or custom pipelines.
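
The Quick Start prints raw logits. To map them to a label, apply a softmax and read the model's id2label table. A minimal sketch, assuming the checkpoint ships a standard classification config, continuing from the variables above:

import torch

# Turn the raw logits into probabilities and pick the top class
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))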


πŸ› οΈ Agent Integration

omriX is well-suited for use in lightweight agent frameworks such as:

  • Custom Python agents
  • Tool-calling pipelines
  • smolagents-style workflows
  • Local or edge deployments

Its small size and fast responses make it ideal for chaining with tools or running alongside other models.
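
As an illustration, here is a minimal custom Python agent that uses omriX to route a task and then dispatches a tool. The label names ("explain", "generate") and the tool functions are hypothetical placeholders for this sketch, not something the model card defines:

from transformers import pipeline

# omriX classifies the incoming task; a plain dict maps labels to tools.
classifier = pipeline("text-classification", model="xlelords/omriX")

def explain_code(task: str) -> str:
    return f"[explainer] analysing: {task}"

def generate_code(task: str) -> str:
    return f"[generator] drafting code for: {task}"

# NOTE: "explain" / "generate" are assumed label names for illustration.
TOOLS = {"explain": explain_code, "generate": generate_code}

def run_agent(task: str) -> str:
    label = classifier(task)[0]["label"]
    tool = TOOLS.get(label, explain_code)  # fall back to explaining
    return tool(task)

print(run_agent("def add(a, b): return a + b"))

Swapping the dict for real tools (a linter, a code-search call, a larger generation model) is all it takes to grow this sketch into a working pipeline.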


πŸ“„ License

This model is released under the Apache-2.0 License.
You are free to use, modify, distribute, and build upon it β€” even commercially.


🀝 Contributing

Contributions are welcome!
Feel free to open issues or pull requests for:

  • Documentation improvements
  • Benchmarks
  • Agent examples
  • Optimized inference setups

⭐ Final Notes

If you’re looking for a fast, cheap, and capable small coding agent, omriX is built to get out of your way and let you ship.

Enjoy πŸš€
