Prototype Gujarati Fine-Tuned Model

This is an experimental Gujarati language model, fine-tuned from [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B). It was trained on a small dataset (~10k samples) and evaluated with BLEU; the score indicates that generation quality is still low at this stage.

⚠️ Disclaimer: This is only a prototype for research and testing purposes. It may produce incorrect or inconsistent results and should not be used in production.

Future work will focus on expanding the dataset, tuning hyperparameters, and improving performance.
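
Below is a minimal loading-and-generation sketch, assuming the checkpoint works with the standard Hugging Face `transformers` causal-LM API; the prompt and generation settings are illustrative only, not a recommended configuration.

```python
# Hypothetical usage sketch; assumes the standard transformers causal-LM API
# and that the checkpoint loads without extra configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dhyey3559/gujarati-finetune-llama3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Gujarati prompt: "Write a short paragraph about Gujarat."
prompt = "ગુજરાત વિશે એક ટૂંકો ફકરો લખો."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```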

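The BLEU evaluation mentioned above is not detailed in this card. A hedged sketch of how such scoring could be run with `sacrebleu` is shown below; the hypotheses and references here are placeholders, not the data actually used to evaluate this model.

```python
# Illustrative BLEU scoring with sacrebleu; the sentences below are placeholders,
# not the actual evaluation data for this model.
import sacrebleu

hypotheses = ["મોડેલ દ્વારા જનરેટ થયેલું વાક્ય"]   # model outputs
references = [["સંદર્ભ (ગ્રાઉન્ડ-ટ્રુથ) વાક્ય"]]   # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```
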
Model size: 1B params · Tensor type: BF16 · Format: Safetensors