---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
- meta-llama/Llama-3.1-8B-Instruct
- meta-llama/Llama-3.1-70B-Instruct
- openai/clip-vit-large-patch14
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
tags:
- pytorch
- llama-3
- zero-shot-vision-encoder-grafting
---
## Zero-Shot Vision Encoder Grafting via LLM Surrogates
*International Conference on Computer Vision (ICCV), 2025*
This repository contains the vision encoders and surrogate models described in [Zero-Shot Vision Encoder Grafting via LLM Surrogates](https://huggingface.co/papers/2505.22664).
<p align="left" style="display: flex; gap: 8px; align-items: center;">
<a href="https://arxiv.org/abs/2505.22664"><img src="https://img.shields.io/badge/arXiv%20-paper-b31b1b.svg?style=flat-square" /></a>
<a href="https://github.com/facebookresearch/zero"><img src="https://img.shields.io/badge/GitHub%20-facebookresearch/zero-0081fb.svg?style=flat-square" /></a>
<a href="https://github.com/facebookresearch/zero/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-CC--BY--NC%204.0-black.svg?style=flat-square" /></a>
</p>
### Paper Abstract
Vision language models (VLMs) typically pair a modestly sized vision encoder with a large language model (LLM), e.g., Llama-70B, making the decoder the primary computational burden during training. To reduce costs, a potentially promising strategy is to first train the vision encoder using a small language model before transferring it to the large one. We construct small "surrogate models" that share the same embedding space and representation language as the large target LLM by directly inheriting its shallow layers. Vision encoders trained on the surrogate can then be directly transferred to the larger model, a process we call zero-shot grafting -- when plugged directly into the full-size target LLM, the grafted pair surpasses the encoder-surrogate pair and, on some benchmarks, even performs on par with full decoder training with the target LLM. Furthermore, our surrogate training approach reduces overall VLM training costs by ~45% when using Llama-70B as the decoder.
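To make the surrogate idea concrete, the sketch below builds a shallow decoder that inherits the token embeddings, the first few transformer layers, the final norm, and the LM head of a target Llama model, so it shares the target's embedding space. This is a minimal illustration under our own assumptions (the layer count `k` is hypothetical, and module names follow the Hugging Face Llama implementation); it is not the paper's training code, and the actual surrogate construction may differ.

```python
# Minimal sketch (illustrative, not the paper's code): build a shallow "surrogate"
# decoder that inherits the embeddings and first k layers of a target Llama model.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

target_id = "meta-llama/Llama-3.1-8B-Instruct"  # the full-size target LLM
k = 8                                           # assumed surrogate depth (hypothetical)

target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16)

# Same config as the target, but with only k hidden layers.
cfg = AutoConfig.from_pretrained(target_id)
cfg.num_hidden_layers = k
surrogate = AutoModelForCausalLM.from_config(cfg).to(torch.bfloat16)

# Inherit the target's shallow weights so the surrogate shares its embedding space.
surrogate.model.embed_tokens.load_state_dict(target.model.embed_tokens.state_dict())
for i in range(k):
    surrogate.model.layers[i].load_state_dict(target.model.layers[i].state_dict())
surrogate.model.norm.load_state_dict(target.model.norm.state_dict())
surrogate.lm_head.load_state_dict(target.lm_head.state_dict())
```

A vision encoder trained against such a surrogate can then, as described in the paper, be plugged directly into the full-size target LLM (zero-shot grafting).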
### Sample Usage
This model is a full Vision-Language Model (`LlavaLlamaForCausalLM`) incorporating a surrogate-trained vision encoder, and can be used directly with the `transformers` library.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
from PIL import Image
import requests
from io import BytesIO
# Load the model and processor
# Example model ID for a surrogate-trained 8B Llama-3.1 encoder
model_id = "tomg-group-umd/llama3.1-8b_surrogate-trained-encoder"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # required: the checkpoint uses a custom LlavaLlamaForCausalLM architecture
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Prepare inputs
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/bird_sized.jpg"
image = Image.open(BytesIO(requests.get(image_url).content))
messages = [
    {"role": "user", "content": "What is in this image?"},
]

# Apply the chat template and process inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

# Generate a response and decode only the newly generated tokens (drop the echoed prompt)
output_ids = model.generate(**inputs, max_new_tokens=50)
output = processor.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(output)
# Expected output (may vary slightly):
# "A bird with a blue head and green body is perched on a branch. The bird has a long tail and is facing to the right."
```
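Since the surrogate-trained encoder is the main artifact, a natural next step is to graft it into a VLM built around the full-size target LLM. The sketch below is purely illustrative: the target checkpoint ID is hypothetical, and the `vision_tower` / `mm_projector` attribute names are assumed LLaVA-style conventions; consult the GitHub repository for the actual grafting procedure.

```python
# Purely illustrative zero-shot grafting sketch: copy the surrogate-trained vision
# encoder and projector into a VLM that wraps the full-size target LLM.
# NOTE: the model ID below is hypothetical, and `vision_tower` / `mm_projector`
# are assumed LLaVA-style attribute names; see the official repo for the real flow.
target_vlm = AutoModelForCausalLM.from_pretrained(
    "tomg-group-umd/llama3.1-70b_vlm",  # hypothetical target-LLM VLM checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

target_vlm.model.vision_tower.load_state_dict(model.model.vision_tower.state_dict())
target_vlm.model.mm_projector.load_state_dict(model.model.mm_projector.state_dict())
```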
### Citation
If you find our work helpful in your research, please cite it as:
```bibtex
@inproceedings{yue2025zero,
  title     = {Zero-Shot Vision Encoder Grafting via LLM Surrogates},
  author    = {Yue, Kaiyu and Singla, Vasu and Jia, Menglin and Kirchenbauer, John and Qadri, Rifaa and Cai, Zikui and Bhatele, Abhinav and Huang, Furong and Goldstein, Tom},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```