Instruction Residuals
This repository contains instruction residuals (delta weights) computed as the parameter-wise difference between google/gemma-3-1b-it and google/gemma-3-1b-pt.
Apply these residuals to the base model to reconstruct the instruction-tuned weights without retraining.
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from residuals import Residuals

# Load the pre-trained base model and its tokenizer
base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
tok = AutoTokenizer.from_pretrained("google/gemma-3-1b-pt")

# Load the residuals and add them to the base weights
res = Residuals.from_pretrained("residuals/gemma-3-1b")
res.apply(base, base_tokenizer=tok)
```
Provenance
- Created at: 2025-10-25T18:43:38.287673+00:00
- DType: float32
- Parameters: 341
- Shapes hash: 50bd0b958c5645bee4bc25d35eaa9c3bba79e2f7144e49db51f1fcd37440dada
- Names hash: 54c326e0a2a310020e89db320f7c0d9a3cef063d8b1251f78cbf7c0db721b3cd
- Base model: google/gemma-3-1b-pt
- Instruction model: google/gemma-3-1b-it
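The shapes and names hashes above can serve as integrity checks before applying residuals. The exact scheme is defined by the residuals package; the sketch below only illustrates the idea, and its canonical serialization (sorted names joined by newlines, shapes rendered as `name:d1xd2`) is an assumption, not the package's actual format:

```python
import hashlib

def names_hash(param_names):
    # Assumed scheme: SHA-256 over the sorted parameter names,
    # joined by newlines. Order-independent by construction.
    payload = "\n".join(sorted(param_names)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def shapes_hash(shapes):
    # Assumed scheme: SHA-256 over "name:dim1xdim2" lines, sorted by name.
    lines = [f"{name}:{'x'.join(map(str, shape))}"
             for name, shape in sorted(shapes.items())]
    return hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()
```

Comparing such a hash against the recorded value would catch a base checkpoint whose parameter names or shapes do not match the ones the residuals were computed from.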
Files
- model.safetensors: Serialized residual tensors (safetensors format).
- (optional) model.safetensors.index.json plus shard files model-00001-of-000N.safetensors, ... for multi-part weights.
- config.json: Residuals metadata and provenance.
- tokenizer files: Saved tokenizer for compatibility.
About this format
These are additive residuals (task vectors). Applying them to the base model's parameters reconstructs the instruction-tuned model.
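As a toy illustration of this arithmetic (plain Python lists standing in for weight tensors; this is not the residuals package's implementation): computing residuals is a parameter-wise subtraction, and applying them is the matching addition.

```python
# Toy sketch of additive residuals (task vectors). Each "model" is a
# dict mapping parameter names to flat lists of floats.

def compute_residuals(base, instruct):
    # residual[name] = instruct[name] - base[name], element-wise
    return {name: [i - b for i, b in zip(instruct[name], base[name])]
            for name in base}

def apply_residuals(base, residuals):
    # Reconstruct instruction-tuned weights: base[name] + residual[name]
    return {name: [b + r for b, r in zip(base[name], residuals[name])]
            for name in base}

# Tiny fake checkpoints with a single parameter "w"
base_weights = {"w": [0.1, 0.2, 0.3]}
instruct_weights = {"w": [0.4, 0.1, 0.5]}

delta = compute_residuals(base_weights, instruct_weights)
rebuilt = apply_residuals(base_weights, delta)
```

Up to floating-point round-off, `rebuilt` matches `instruct_weights`, which is why the residuals alone (plus the public base checkpoint) suffice to reconstruct the instruction-tuned model.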
Tools
Generated with the residuals Python package. Install it with `pip install residuals`.