---
license: mit
tags:
- tokenizer
- sentencepiece
- monolingual
- ori
- vocab-128000
---

# Monolingual Tokenizer - Odia (Vocab 128000)

This is a monolingual tokenizer trained on Odia text with a vocabulary size of 128000.

## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monolingual-tokenizer-native-ori-vocab-128000")
```

## Files

- `ori.model`: SentencePiece model file
- `ori.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration

## Training Details

- Language: Odia (ori)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram