Improve model card: Update license, add abstract, usage, and citation

#2, opened by nielsr (HF Staff)
Files changed (1):
  1. README.md +60 -2
README.md CHANGED
@@ -5,7 +5,7 @@ base_model:
  - meta-llama/Llama-3.1-70B-Instruct
  - openai/clip-vit-large-patch14
  library_name: transformers
- license: cc
+ license: cc-by-nc-4.0
  pipeline_tag: image-text-to-text
  tags:
  - pytorch
@@ -23,4 +23,62 @@ This repository contains vision encoders and surrogate models, which are described
  <a href="https://arxiv.org/abs/2505.22664"><img src="https://img.shields.io/badge/arXiv%20-paper-b31b1b.svg?style=flat-square" /></a>
  <a href="https://github.com/facebookresearch/zero"><img src="https://img.shields.io/badge/GitHub%20-facebookresearch/zero-0081fb.svg?style=flat-square" /></a>
  <a href="https://github.com/facebookresearch/zero/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-CC--BY--NC%204.0-black.svg?style=flat-square" /></a>
- </p>
+ </p>
+
+ ### Paper Abstract
+ Vision language models (VLMs) typically pair a modestly sized vision encoder with a large language model (LLM), e.g., Llama-70B, making the decoder the primary computational burden during training. To reduce costs, a potentially promising strategy is to first train the vision encoder using a small language model before transferring it to the large one. We construct small "surrogate models" that share the same embedding space and representation language as the large target LLM by directly inheriting its shallow layers. Vision encoders trained on the surrogate can then be directly transferred to the larger model, a process we call zero-shot grafting -- when plugged directly into the full-size target LLM, the grafted pair surpasses the encoder-surrogate pair and, on some benchmarks, even performs on par with full decoder training with the target LLM. Furthermore, our surrogate training approach reduces overall VLM training costs by ~45% when using Llama-70B as the decoder.
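As a rough illustration of the surrogate construction sketched in the abstract, the snippet below inherits the shallow layers of the target Llama decoder via the Hugging Face `transformers` Llama classes. The layer count `k` and the truncate-by-slicing recipe are illustrative assumptions only; the actual construction in the paper and the `facebookresearch/zero` code may differ.

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative sketch only: build a small "surrogate" decoder that shares the
# target LLM's embedding space by keeping its embeddings and first k layers.
target_id = "meta-llama/Llama-3.1-70B-Instruct"  # target decoder named in this model card
k = 8  # number of shallow layers to inherit (assumed, not the paper's setting)

surrogate = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16)

# Keep only the first k decoder blocks; embeddings, final norm, and LM head remain.
surrogate.model.layers = surrogate.model.layers[:k]
surrogate.config.num_hidden_layers = k

# A vision encoder trained against `surrogate` is what the paper then grafts,
# zero-shot, onto the full-size Llama-70B decoder.
```

Because the surrogate keeps the target's embedding space, features the encoder learns against it are already in the right representation language for the full decoder, which is why the grafting step can work zero-shot.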
+
+ ### Sample Usage
+
+ This model is a full Vision-Language Model (`LlavaLlamaForCausalLM`) incorporating a surrogate-trained vision encoder, and can be used directly with the `transformers` library.
+
+ ```python
+ from transformers import AutoProcessor, AutoModelForCausalLM
+ import torch
+ from PIL import Image
+ import requests
+ from io import BytesIO
+
+ # Load model and processor
+ # Example model ID for a surrogate-trained 8B Llama-3.1 encoder
+ model_id = "tomg-group-umd/llama3.1-8b_surrogate-trained-encoder"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     trust_remote_code=True,  # Required as it uses a custom LlavaLlamaForCausalLM architecture
+ )
+ processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
+
+ # Prepare inputs
+ image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/bird_sized.jpg"
+ image = Image.open(BytesIO(requests.get(image_url).content))
+
+ messages = [
+     {"role": "user", "content": "What is in this image?"},
+ ]
+
+ # Apply chat template and process inputs
+ prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
+
+ # Generate response
+ output_ids = model.generate(**inputs, max_new_tokens=50)
+ output = processor.decode(output_ids[0], skip_special_tokens=True)
+
+ print(output)
+ # Expected output (may vary slightly):
+ # "A bird with a blue head and green body is perched on a branch. The bird has a long tail and is facing to the right."
+ ```
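One caveat about the example above: `generate` typically returns the prompt tokens followed by the newly generated ones, so decoding `output_ids[0]` in full may echo the prompt before the answer. Continuing the same example, a common pattern (assuming the processor output exposes `input_ids`) is to slice the prompt off before decoding:

```python
# Keep only the tokens generated after the prompt, then decode those.
prompt_len = inputs["input_ids"].shape[1]
new_tokens = output_ids[0][prompt_len:]
print(processor.decode(new_tokens, skip_special_tokens=True))
```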
+
+ ### Citation
+ If you find our work helpful in your research, please cite it as:
+
+ ```bibtex
+ @inproceedings{yue2025zero,
+     title     = {Zero-Shot Vision Encoder Grafting via LLM Surrogates},
+     author    = {Yue, Kaiyu and Singla, Vasu and Jia, Menglin and Kirchenbauer, John and Qadri, Rifaa and Cai, Zikui and Bhatele, Abhinav and Huang, Furong and Goldstein, Tom},
+     booktitle = {International Conference on Computer Vision (ICCV)},
+     year      = {2025}
+ }
+ ```