---
language:
- en
license: apache-2.0
tags:
- text-generation
- instruct
- web-agent
- lora
- gguf
datasets:
- ArunKr/gui_grounding_dataset-100
base_model: unsloth/gemma-3-270m-it
library_name: transformers
pipeline_tag: text-generation
---

# gemma-3-270m-it-web-agent - Fine-tuned

This repository contains three variants of the model:

- **LoRA adapters** → [ArunKr/gemma-3-270m-it-web-agent-lora](https://huggingface.co/ArunKr/gemma-3-270m-it-web-agent-lora)
- **Merged FP16 weights** → [ArunKr/gemma-3-270m-it-web-agent-16bit](https://huggingface.co/ArunKr/gemma-3-270m-it-web-agent-16bit)
- **GGUF quantizations** → [ArunKr/gemma-3-270m-it-web-agent-gguf](https://huggingface.co/ArunKr/gemma-3-270m-it-web-agent-gguf)

### Training

- Base model: `unsloth/gemma-3-270m-it`
- Dataset: `ArunKr/gui_grounding_dataset-100`
- Method: LoRA fine-tuning with [Unsloth](https://github.com/unslothai/unsloth) (a training sketch appears at the end of this card)

### Quantizations

We provide `f16`, `bf16`, `f32`, and `q8_0` GGUF files for llama.cpp / Ollama. A llama.cpp loading sketch appears at the end of this card.

### Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ArunKr/gemma-3-270m-it-web-agent-16bit"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Encode the prompt, generate, and decode the output back to text
inputs = tok("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
```

A chat-template variant of this example appears at the end of this card.

### Ollama Example

```bash
# Ollama can pull GGUF repos directly from Hugging Face; the tag selects a quantization
ollama run hf.co/ArunKr/gemma-3-270m-it-web-agent-gguf:Q8_0
```

See [ollama.com](https://www.ollama.com) for installation instructions.
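For reference, here is a minimal sketch of the LoRA fine-tuning setup described under Training, assuming typical Unsloth + TRL usage. The rank, alpha, target modules, text field, and trainer settings are illustrative assumptions, not the values used for this model, and depending on your `trl` version the `tokenizer` argument may instead be named `processing_class`.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model and tokenizer through Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,
    load_in_4bit=False,
)

# Attach LoRA adapters (rank/alpha/targets are illustrative, not the actual run)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("ArunKr/gui_grounding_dataset-100", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",  # assumed column name; check the dataset schema
        per_device_train_batch_size=2,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```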
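Similarly, a sketch of loading one of the GGUF quantizations with the `llama-cpp-python` bindings. The `filename` glob is an assumption based on the quantizations listed above; check the GGUF repo's file list for the exact name.

```python
from llama_cpp import Llama

# Download and load the q8_0 GGUF directly from the Hub
llm = Llama.from_pretrained(
    repo_id="ArunKr/gemma-3-270m-it-web-agent-gguf",
    filename="*q8_0.gguf",  # glob pattern is an assumption
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```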
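Finally, since the base model is instruction-tuned, prompts generally behave better when formatted with the tokenizer's chat template. A variant of the usage example above, assuming the tokenizer ships Gemma's chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ArunKr/gemma-3-270m-it-web-agent-16bit"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Wrap the prompt in the model's chat template before generating
messages = [{"role": "user", "content": "Hello"}]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
```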