Building on Sawtone, we've been testing a different way to feed language into an LLM to build the next generation of multilingual AI.
The usual setup gives the model tokenized text and asks it to perform various linguistic tasks. That works surprisingly well, until it doesn't. Accents disappear. Words get mangled. Internal structure gets blurred away. And the cost of that gets higher once you move into multilingual and lower-resource settings.
So we tried adding a second path.
In addition to the normal text input, the model also receives Sawtone: a byte-level word representation that preserves how a word is written, how it sounds, and how it is structured.
Same LLM. Better interface.
In this proof of concept with Qwen 3.5 0.8B, that pushed our eval score from 64% to 88%. The gains showed up exactly where tokenized models usually get shaky: diacritics, character order, exact spelling, and other form-sensitive behavior.
Sawtone itself is tokenizer-free, byte-level, and pre-trained across 507 languages.
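To make the byte-level idea concrete: Sawtone's actual encoding isn't spelled out here, but as a rough sketch, a byte-level view of a word (the hypothetical `byte_ids` below is our illustration, not Sawtone's API) keeps every diacritic and character intact, whereas a subword tokenizer may split or normalize them away.

```python
# Illustrative sketch only: not Sawtone's real encoding.
# A byte-level representation preserves a word's exact written form,
# including diacritics, as raw UTF-8 byte values.

def byte_ids(word: str) -> list[int]:
    """Represent a word as its raw UTF-8 byte values (0-255)."""
    return list(word.encode("utf-8"))

# The accent in "café" survives as its own bytes (0xC3 0xA9),
# so "café" and "cafe" stay distinguishable at the input level.
print(byte_ids("café"))  # [99, 97, 102, 195, 169]
print(byte_ids("cafe"))  # [99, 97, 102, 101]
```

Because every Unicode string maps to bytes the same way, this kind of representation needs no per-language vocabulary, which is what makes scaling to hundreds of languages tractable.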
Still early, but promising!