MedGemma-4B Anatomy v2.1 (Optimized)

Improved version with better generalization; fixes the overfitting seen in v2.0.

Key Improvements

  • ✅ Reduced epochs: 6 → 3 (prevents overfitting)
  • ✅ Early stopping: Stops when validation loss plateaus
  • ✅ Stronger regularization: Increased dropout and weight decay
  • ✅ Better convergence: Higher learning rate with more warmup

Model Details

  • Base Model: google/medgemma-4b-it (4B parameters)
  • Training Data: 895 anatomy Q&A pairs
  • Method: LoRA (r=32, α=64, dropout=0.1), see the sketch after this list
  • Epochs: 3 (with early stopping)
  • Training Time: ~0.2 hours
  • Hardware: A100 40GB GPU
  • Final Train Loss: 1.3326
  • Best Val Loss: 1.2016
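
The LoRA setup listed above (r=32, α=64, dropout=0.1) can be reproduced roughly as follows with the PEFT library. This is a minimal sketch, not the original training script: the target_modules list is an assumption based on common Gemma-family fine-tuning recipes.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM
import torch

# Load the base model and attach LoRA adapters (r=32, alpha=64, dropout=0.1).
base_model = AutoModelForCausalLM.from_pretrained(
    "google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed projection modules; a common choice for Gemma-family models.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()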

Training Configuration

CONFIG = {
    'max_seq_length': 1024,
    'num_epochs': 3,
    'batch_size': 4,  # effective batch size 16
    'learning_rate': 0.0001,
    'lora_r': 32,
    'lora_alpha': 64,
    'lora_dropout': 0.1,
    'weight_decay': 0.03,
    'early_stopping_patience': 5
}
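
For orientation, the configuration above maps onto Hugging Face TrainingArguments plus an EarlyStoppingCallback roughly as sketched below. The gradient_accumulation_steps and warmup_ratio values are assumptions (chosen to match the effective batch size of 16 and the "more warmup" note), not values read from the original training run.

from transformers import TrainingArguments, EarlyStoppingCallback

# Rough mapping of CONFIG onto Trainer arguments (recent transformers versions).
training_args = TrainingArguments(
    output_dir="medgemma-anatomy-v2.1",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,   # assumed; 4 x 4 = effective batch size 16
    learning_rate=1e-4,
    weight_decay=0.03,
    warmup_ratio=0.1,                # assumed value for the increased warmup
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # required for early stopping on eval loss
    metric_for_best_model="eval_loss",
    bf16=True,
)

# Stop once validation loss has not improved for 5 consecutive evaluations.
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)

The callback would be passed to the trainer via callbacks=[early_stopping].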

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "krishna195/medgemma-anatomy-v2.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

question = "What is the carpal tunnel?"
prompt = f"<start_of_turn>user\n{question}<end_of_turn>\n<start_of_turn>model\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is needed for the temperature setting to take effect
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
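
If the checkpoint ships a chat template, the same Gemma-style turn markers can be produced with tokenizer.apply_chat_template instead of writing the prompt by hand. The snippet below is an optional variant of the usage example that also decodes only the newly generated tokens.

# Optional: build the prompt via the chat template (if the tokenizer provides one).
messages = [{"role": "user", "content": question}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
# Skip the prompt tokens and decode only the model's answer.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))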

License

Apache 2.0
