
🌌 Orion Agent (Duchifat-2 Based)

Welcome to the official repository of Orion, a high-performance AI agent engineered for advanced community management, server security, and intelligent interaction.

🤖 Model Overview

Orion is not just a chatbot; it is a specialized AI Agent built upon the robust Duchifat-2 architecture. Through a rigorous alignment process consisting of intensive Supervised Fine-Tuning (SFT) and targeted behavioral conditioning, Orion has been transformed into a dedicated guardian for digital communities.

The model is specifically designed to balance professional authority with a modern, approachable persona, making it the ideal solution for high-traffic environments where safety and engagement are paramount.

๐Ÿ› ๏ธ Key Capabilities

  • Strategic Guardian: Orion is programmed to monitor and maintain a safe, high-quality environment, acting as a digital layer of security.
  • Identity-Centric Logic: Developed by Raziel, the model possesses a strong sense of self-awareness and mission, consistently identifying as the Orion Agent.
  • Multilingual Fluidity: Optimized for seamless transitions between Hebrew and English, ensuring a natural conversational flow in diverse communities.
  • Contextual Awareness: Unlike standard rule-based bots, Orion leverages its large-scale pre-training to understand nuance, intent, and community dynamics.

🎯 The Orion Alignment

The fine-tuning of Orion focused on achieving a specific "Sweet Spot" in Large Language Model optimization:

  1. High Generalization: Retaining the vast knowledge base and linguistic intelligence of the foundation model.
  2. Behavioral Locking: Ensuring strict adherence to the <|instruction|> and <|assistant|> interaction format.
  3. Safety First: Integrating proactive safety protocols to prevent toxicity and maintain community standards.
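The behavioral locking in point 2 can be sketched as a small helper. The special tokens here (`<|instruction|>`, `<|assistant|>`) are taken from this repository's usage example; the exact chat template in the tokenizer config may differ.

```python
# Minimal sketch of the Orion interaction format (point 2 above).
# Token names match the usage example in this README; treat the exact
# template as an assumption until verified against the tokenizer config.
def build_orion_prompt(instruction: str) -> str:
    return f"<|instruction|>\n{instruction}\n<|assistant|>\n"

print(build_orion_prompt("Welcome a new member to the server."))
```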

๐Ÿ—๏ธ Architecture & Pedigree

  • Base Model: Duchifat-2 (Advanced Transformer Architecture)
  • Developer: Raziel
  • Specialization: Community Management & Security Orchestration
  • Inference Format: Custom Orion-Format (Optimized for Agentic workflows)

💬 A Word from the Engine

Orion represents a leap forward in how we manage digital spaces. By merging the raw power of LLMs with a focused, mission-driven alignment, we have created an entity that doesn't just respond; it protects and serves.

💡 Usage Example

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

# Model path on the Hugging Face Hub (public repo)
MODEL_PATH = "razielAI/Orion-1"

class OrionEngine:
    def __init__(self, model_path):
        # When loading from the Hub, the library handles caching automatically
        print(f"🚀 Loading Orion Engine from Hugging Face Hub: {model_path}...")

        # 1. Load the tokenizer - ensures all special tokens defined on the Hub are loaded
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

        # 2. Load the model at the appropriate precision (BF16 is optimal for Duchifat-2)
        # device_map="auto" maps the weights across the available GPU(s) automatically
        self.model = AutoModelForCausalLM.from_pretrained(
            model_path,
            trust_remote_code=True,
            torch_dtype=torch.bfloat16 if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else torch.float16,
            device_map="auto"
        )
        
        # Verify that pad_token is set to the eos_token, as configured during training
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
            
        self.model.eval()
        print("✅ Orion is ready for inference.")

    def generate(self, instruction, max_new_tokens=512, temperature=0.4):
        # 3. Build the prompt in the exact format the model was trained on
        # Structure: <|instruction|>\n{text}\n<|assistant|>\n
        prompt = f"<|instruction|>\n{instruction}\n<|assistant|>\n"
        
        # 4. Encoding
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        
        # 5. Set the EOS token ID for a hard stop
        # During training: tokenizer.eos_token = "<|eos|>"
        eos_id = self.tokenizer.convert_tokens_to_ids("<|eos|>")

        with torch.no_grad():
            output_tokens = self.model.generate(
                **inputs,
                max_new_tokens=max_new_tokens,
                do_sample=True,
                temperature=temperature, # aggressive safety calls for a low temperature
                top_p=0.9,               # nucleus sampling
                repetition_penalty=1.15, # discourage repetitive output
                eos_token_id=eos_id,     # stop on the explicit end-of-sequence token
                pad_token_id=self.tokenizer.pad_token_id
            )

        # 6. Slice the input IDs (the prompt) off the final output
        input_length = inputs.input_ids.shape[1]
        generated_tokens = output_tokens[0][input_length:]
        
        # 7. Decode, skipping special tokens, for a clean response
        response = self.tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()
        
        return response

# --- Interactive run ---
if __name__ == "__main__":
    # The model is downloaded here from the Hugging Face Hub
    orion = OrionEngine(MODEL_PATH)
    
    print("\n" + "="*50)
    print("Orion Agent Chat Interface (Remote Hub)")
    print("="*50)

    while True:
        user_input = input("\n📩 [User]: ").strip()
        if user_input.lower() in ["exit", "quit", "exit()"]:
            break
            
        print("\n🤖 [Orion]: ", end="", flush=True)
        response = orion.generate(user_input)
        print(response)
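The post-processing in `OrionEngine.generate` (steps 5-6) can be illustrated in isolation: strip the prompt tokens, then truncate at the explicit `<|eos|>` id. The token ids below are invented purely for illustration.

```python
# Toy illustration of steps 5-6 in OrionEngine.generate.
# Ids are made up; in practice they come from the Orion tokenizer.
EOS_ID = 99  # hypothetical id for "<|eos|>"

def extract_response(output_ids, prompt_len, eos_id=EOS_ID):
    generated = output_ids[prompt_len:]  # step 6: drop the prompt tokens
    if eos_id in generated:              # step 5: hard stop at <|eos|>
        generated = generated[:generated.index(eos_id)]
    return generated

print(extract_response([5, 6, 7, 11, 12, 99, 0], prompt_len=3))  # [11, 12]
```

In the real pipeline, `model.generate` already halts at `eos_token_id` and `skip_special_tokens=True` drops the marker during decoding; this sketch just makes the two steps explicit.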

Developed by Raziel | Powered by Innovation.

Model size: 0.1B parameters · Safetensors · BF16