Kasawa — SmolLM2-135M Fine-tuned on Twi
Kasawa is a compact Twi language model capable of generating coherent Twi text as a causal language model. Note that this is a base model: it has not undergone instruction tuning.
Built on SmolLM2-135M and trained on the Pristine Twi Dataset: ~999k rows of clean, natural-sounding Twi text spanning four styles (narrative, dialogue, monologue, and storytelling), grounded in real Ghanaian news topics and named entities.
Try it yourself in the demo.
Intended Use
This model is released for research and non-profit use only. The primary goal is to lower the barrier for experimentation in the Twi/Akan NLP space — particularly as a base for instruction-tuned models targeting more advanced tasks such as summarization, question answering, and dialogue in Twi.
It can also be very useful as a backbone for LLM-based text-to-speech (TTS) and automatic speech recognition (ASR) systems, as well as machine translation (MT) models involving Twi.
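Since Kasawa is a plain base model, instruction tuning on top of it requires packing each instruction/response pair into a single training string. Below is a minimal sketch of such a formatter; the prompt template and the `<|endoftext|>` end-of-sequence token are assumptions for illustration, not part of this release.

```python
def format_instruction_pair(instruction: str, response: str,
                            eos_token: str = "<|endoftext|>") -> str:
    """Pack one instruction/response pair into a single SFT training string.

    The template below is a hypothetical example; any consistent template
    works, as long as the same template is used at inference time.
    """
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{response}{eos_token}"
    )


# Example with a Twi instruction pair.
text = format_instruction_pair(
    "Kyerɛ me Twi kasa ho asɛm.",          # "Tell me about the Twi language."
    "Twi yɛ Akan kasa a wɔka wɔ Ghana.",   # "Twi is an Akan language spoken in Ghana."
)
print(text)
```

Strings formatted this way can then be tokenized and fed to a standard causal-LM fine-tuning loop.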
Quick Start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model in bfloat16 and place it on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    "ghananlpcommunity/kasawa",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ghananlpcommunity/kasawa")

gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Ɔpɛnimaa bosome, 2025,"
out = gen(prompt, max_new_tokens=80, do_sample=True, temperature=0.8, top_p=0.9)
print(out[0]["generated_text"])
```
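The sampling settings from the Quick Start can also be bundled into a reusable `GenerationConfig` object. This is a convenience sketch; the `repetition_penalty` value is an assumption you may want to tune if outputs start to loop.

```python
from transformers import GenerationConfig

# Mirror the Quick Start sampling settings in one reusable object.
sampling_config = GenerationConfig(
    max_new_tokens=80,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.1,  # assumed value, not from the model release
)

# Pass it to the pipeline or to model.generate, e.g.:
#   out = gen(prompt, generation_config=sampling_config)
print(sampling_config.temperature, sampling_config.top_p)
```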
Training Details
| Detail | Value |
|---|---|
| Base model | SmolLM2-135M |
| Dataset | Pristine Twi (~999k rows, ~250M tokens) |
| Epochs | ~2 |
| Hardware | 1× 80 GB GPU |
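A rough back-of-envelope from the approximate figures above: ~250M tokens over ~999k rows works out to about 250 tokens per row, and ~2 epochs means the model saw on the order of 500M tokens in total. These are estimates derived from the table, not reported numbers.

```python
# Back-of-envelope estimates from the (approximate) training figures.
rows = 999_000
tokens = 250_000_000
epochs = 2

tokens_per_row = tokens / rows        # roughly 250 tokens per row
total_tokens_seen = tokens * epochs   # roughly 500M tokens over training

print(round(tokens_per_row), total_tokens_seen)
```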
Citation & Community
Created by Mich-Seth Owusu for the Ghana NLP Community.
If you build on this model, please credit the original work and consider sharing
your results back with the community.
License: Research and non-profit use only.
Base model: HuggingFaceTB/SmolLM2-135M