Qwen2.5-VL 7B (LoRA merged)

  • Base: Qwen/Qwen2.5-VL-7B-Instruct
  • LoRA adapters are merged into the base weights (a merge sketch follows the inference example below).
  • Inference example:
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # helper for preparing image/video inputs

# Load the merged model and its processor
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "mangsgi/azu", torch_dtype="auto", device_map="auto", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained("mangsgi/azu", trust_remote_code=True)
# Build chat messages with image + text, then call processor(...) and model.generate(...)
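
Continuing from the snippet above (reusing model and processor), a fuller generation sketch that follows the standard Qwen2.5-VL recipe; the image URL and prompt are placeholders, not part of this repository:

# Build a chat message containing an image and a text prompt (placeholder values)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/demo.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Render the chat template and prepare vision inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])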
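
For reference, an adapter merge of this kind would typically be produced with PEFT along the following lines. This is a minimal sketch under assumptions: the adapter path and output directory are hypothetical, and the actual training/merge configuration is not documented here.

from peft import PeftModel
from transformers import Qwen2_5_VLForConditionalGeneration

# Load the original base model, attach the trained LoRA adapters, and fold them in
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto"
)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapters").merge_and_unload()
merged.save_pretrained("qwen2.5-vl-7b-merged")  # single merged checkpoint, no adapter files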

  • Format: Safetensors
  • Model size: 8B params
  • Tensor type: BF16