TurkishReasoner-Gemma3-12B

Model Description

TurkishReasoner-Gemma3-12B is a specialized reasoning model fine-tuned from Google's Gemma3-12B specifically for Turkish language reasoning tasks. This model excels at structured problem-solving with step-by-step reasoning capabilities, making it ideal for complex mathematical, logical, and analytical problems in Turkish.

Key Features

  • Built on Google's multimodal Gemma3-12B foundation
  • Fine-tuned specifically for Turkish reasoning using GRPO (Group Relative Policy Optimization)
  • Supports both text and image inputs for comprehensive reasoning tasks
  • Delivers structured, step-by-step reasoning with clear solution formatting
  • Maintains the base model's 128K token context window
  • Trained on high-quality Turkish reasoning datasets including GSM8K-tr

Technical Specifications

  • Base Model: google/gemma-3-12b-it
  • Parameters: 12 billion
  • Input: Text and images (multimodal capabilities)
  • Hardware Requirements: ~20GB VRAM (NVIDIA RTX 6000 Ada or equivalent)
  • Training Infrastructure: NVIDIA RTX 6000 Ada GPU

Usage

This model is optimized for reasoning-intensive applications in Turkish, including:

  • Educational tools requiring detailed mathematical explanations
  • Research applications exploring complex problem-solving
  • Applications requiring structured reasoning with visual components
  • Turkish-language AI assistants with advanced reasoning capabilities

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

# Load the base model in bfloat16 (full fp32 would need roughly twice the
# ~20GB VRAM noted above), then attach the fine-tuned LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-12b-it",
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base_model, "Chan-Y/TurkishReasoner-Gemma3-12B").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-12b-it")

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# The system prompt (in Turkish) tells the model: show your working between
# <start_working_out> and <end_working_out>, then place the final answer
# between <SOLUTION> and </SOLUTION>, and respond ONLY in Turkish.
# The user question asks: "What is the square root of 121?"
messages = [
    {"role": "system", "content": """Sen kullanıcıların isteklerine Türkçe cevap veren bir asistansın ve sana bir problem verildi.
Problem hakkında düşün ve çalışmanı göster.
Çalışmanı <start_working_out> ve <end_working_out> arasına yerleştir.
Sonra, çözümünü <SOLUTION> ve </SOLUTION> arasına yerleştir.
Lütfen SADECE Türkçe kullan."""},
    {"role": "user", "content": "121'in karekökü kaçtır?"},
]

response = pipe(messages, return_full_text=False)[0]["generated_text"]
print(response)
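Because the system prompt asks the model to wrap its reasoning and final answer in fixed tags, the response can be parsed programmatically. Below is a minimal sketch of such a parser; the helper name `parse_reasoning_output` and the sample string are illustrative, not actual model output:

```python
import re

def parse_reasoning_output(text: str):
    """Split a response into (working_out, solution) using the prompt's tags.

    Returns None for a part whose tags are missing.
    """
    working = re.search(r"<start_working_out>(.*?)<end_working_out>", text, re.DOTALL)
    solution = re.search(r"<SOLUTION>(.*?)</SOLUTION>", text, re.DOTALL)
    return (
        working.group(1).strip() if working else None,
        solution.group(1).strip() if solution else None,
    )

# Illustrative response string (not real model output):
sample = (
    "<start_working_out>121 = 11 * 11, yani karekok 11'dir."
    "<end_working_out><SOLUTION>11</SOLUTION>"
)
steps, answer = parse_reasoning_output(sample)
print(answer)  # → 11
```

Using non-greedy matches with `re.DOTALL` keeps the parser robust when the working-out section spans multiple lines.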

For more information or assistance with this model, please contact the developers.
