TurkishReasoner Collection
Models trained on reasoning tasks in Turkish (4 items).
TurkishReasoner-Gemma3-12B is a specialized reasoning model fine-tuned from Google's Gemma3-12B specifically for Turkish language reasoning tasks. This model excels at structured problem-solving with step-by-step reasoning capabilities, making it ideal for complex mathematical, logical, and analytical problems in Turkish.
This model is optimized for reasoning-intensive applications in Turkish, such as mathematical, logical, and analytical problem solving.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

# Load the base model, apply the TurkishReasoner LoRA adapter, and move it to GPU
base_model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-3-12b-it")
model = PeftModel.from_pretrained(base_model, "Chan-Y/TurkishReasoner-Gemma3-12B").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-12b-it")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
messages = [
    # System prompt (in Turkish): "You are an assistant that answers users'
    # requests in Turkish, and you have been given a problem. Think about the
    # problem and show your work. Place your work between <start_working_out>
    # and <end_working_out>. Then place your solution between <SOLUTION> and
    # </SOLUTION>. Please use ONLY Turkish."
    {"role": "system", "content": """Sen kullanıcıların isteklerine Türkçe cevap veren bir asistansın ve sana bir problem verildi.
Problem hakkında düşün ve çalışmanı göster.
Çalışmanı <start_working_out> ve <end_working_out> arasına yerleştir.
Sonra, çözümünü <SOLUTION> ve </SOLUTION> arasına yerleştir.
Lütfen SADECE Türkçe kullan."""},
    # User question (in Turkish): "What is the square root of 121?"
    {"role": "user", "content": "121'in karekökü kaçtır?"},
]
response = pipe(messages, return_full_text=False)[0]["generated_text"]
print(response)
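Because the system prompt instructs the model to delimit its reasoning and final answer with fixed tags, the raw response can be split programmatically. Here is a minimal sketch of such post-processing; the `parse_reasoning_output` helper name is illustrative, not part of the model card:

```python
import re

def parse_reasoning_output(text):
    """Split a tagged response into its working-out and solution parts.

    Expects reasoning between <start_working_out> and <end_working_out>,
    and the final answer between <SOLUTION> and </SOLUTION>, as requested
    by the system prompt above. Returns None for a part that is missing.
    """
    working = re.search(r"<start_working_out>(.*?)<end_working_out>", text, re.DOTALL)
    solution = re.search(r"<SOLUTION>(.*?)</SOLUTION>", text, re.DOTALL)
    return (
        working.group(1).strip() if working else None,
        solution.group(1).strip() if solution else None,
    )

# Example with a hand-written response in the expected format
sample = (
    "<start_working_out>11 * 11 = 121, yani karekök 11'dir.<end_working_out>"
    "<SOLUTION>11</SOLUTION>"
)
reasoning, answer = parse_reasoning_output(sample)
print(answer)  # -> 11
```

Note that sampled generations may occasionally omit or malform the tags, so the `None` fallback above is worth handling in application code.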
For more information or assistance with this model, please contact the developers.