Prototype Gujarati Fine-Tuned Model
This is an experimental Gujarati language model, fine-tuned from unsloth/Llama-3.2-1B. It was trained on a small dataset (~10k samples) and evaluated with BLEU; the scores indicate that accuracy is still limited at this stage.
⚠️ Disclaimer: This is only a prototype for research and testing purposes. It may produce incorrect or inconsistent results and should not be used in production.
Future work will focus on expanding the dataset, tuning hyperparameters, and improving performance.
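For reference, a minimal sketch of the kind of BLEU computation used in evaluation. This is a plain pure-Python implementation with uniform n-gram weights and a brevity penalty, not the exact evaluation script used for this model; the function names and smoothing choice are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference: str, candidate: str, max_n: int = 4) -> float:
    # Illustrative BLEU: modified n-gram precision + brevity penalty.
    ref, cand = reference.split(), candidate.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum((cand_counts & ref_counts).values())  # clipped matches
        total = max(sum(cand_counts.values()), 1)
        # Tiny floor avoids log(0) when a higher-order n-gram never matches.
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

In production evaluation one would typically use an established implementation such as sacreBLEU rather than a hand-rolled function like this.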