# Monolingual Tokenizer - Konkani (Vocab 128000)
This is a monolingual tokenizer trained on Konkani text with a vocabulary of 128000 tokens.
## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monolingual-tokenizer-iso-gom-vocab-128000")
```
## Files

- `gom.model`: SentencePiece model file
- `gom.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
## Training Details
- Language: Konkani (gom)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram