We just released TRL v0.20 with major multimodal upgrades!
VLM support for GRPO (highly requested by the community!)
New GSPO trainer (from @Qwen, released last week, VLM-ready)
New MPO trainer (multimodal by design, as in the paper)
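For the curious, here is a minimal sketch of what the GRPO/GSPO path can look like in code. The checkpoint, dataset, and toy reward function are placeholders of mine, and `importance_sampling_level` is the config knob TRL documents for switching GRPO's token-level importance ratios to GSPO's sequence-level ones; double-check the exact names against your installed TRL version.

```python
# Minimal sketch (not the release's exact example): GRPO on a VLM, with the
# GSPO variant enabled. Checkpoint, dataset, and reward are placeholders.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt-only dataset; a real VLM run would also carry an image column.
dataset = Dataset.from_dict({"prompt": ["Describe the image."] * 16})

# Toy reward: GRPOTrainer accepts plain callables that score completions.
def reward_len(completions, **kwargs):
    return [-abs(len(c) - 50) for c in completions]

args = GRPOConfig(
    output_dir="grpo-vlm",
    # "sequence"-level importance sampling is what turns GRPO into GSPO.
    importance_sampling_level="sequence",
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",  # any GRPO-supported VLM checkpoint
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```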
Yet Another New Multimodal Fine-Tuning Recipe 🥧
🧑‍🍳 In this @huggingface Cookbook notebook, we demonstrate how to align a multimodal model (VLM) with Mixed Preference Optimization (MPO) using trl.
💡 This recipe is powered by the new MPO support in trl, enabled through a recent upgrade to the DPO trainer!
We align the multimodal model using multiple optimization objectives (losses), guided by a preference dataset (chosen vs. rejected multimodal pairs).
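Concretely, MPO in trl boils down to handing the DPO trainer a list of losses and their weights. Below is a minimal sketch of that setup; the loss names and weights follow the MPO combination documented for TRL's DPO trainer (preference + quality + generation losses), while the model and dataset names are illustrative placeholders, not the notebook's exact choices:

```python
# Minimal sketch, not the notebook verbatim: MPO = DPO trainer + a mix of losses.
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

args = DPOConfig(
    output_dir="mpo-vlm",
    loss_type=["sigmoid", "bco_pair", "sft"],  # preference + quality + generation losses
    loss_weights=[0.8, 0.2, 1.0],              # weighting as in the MPO paper
)

# Placeholder dataset: any preference set with prompt/chosen/rejected
# (plus image) columns in the format the DPO trainer expects will do.
dataset = load_dataset("openbmb/RLAIF-V-Dataset", split="train")

trainer = DPOTrainer(
    model="HuggingFaceTB/SmolVLM-Instruct",  # placeholder VLM checkpoint
    args=args,
    train_dataset=dataset,
)
trainer.train()
```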
🧑‍🍳 New Multimodal Fine-Tuning Recipe 🧑‍🍳
➡️ In this new @huggingface Cookbook recipe, I walk you through the process of fine-tuning a Vision Language Model (VLM) for Object Detection with Visual Grounding, using TRL.
Object detection typically involves detecting categories in images (e.g., vase).
By combining it with visual grounding, we add contextual understanding: instead of detecting just "vase", we can detect "middle vase" in an image.
VLMs are super powerful!
In this case, I use PaliGemma 2, which already supports object detection, and extend it to also handle visual grounding.
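To make the grounding idea concrete, here is a hedged sketch of how a training target can be built in PaliGemma's detection format, where a box becomes four <locXXXX> tokens (y_min, x_min, y_max, x_max, each normalized to a 0-1023 grid) followed by the label. The helper below is my illustration, not the recipe's exact code:

```python
# Hedged sketch: building a grounded-detection training pair in
# PaliGemma's box format. The grounding phrase ("middle vase")
# replaces the bare category in both the prompt and the target.

def box_to_loc_tokens(box, width, height):
    """Convert an absolute (x_min, y_min, x_max, y_max) box to <locXXXX> tokens."""
    x_min, y_min, x_max, y_max = box
    # Normalize to the 0-1023 grid, in PaliGemma's y-before-x order.
    coords = [y_min / height, x_min / width, y_max / height, x_max / width]
    return "".join(f"<loc{int(round(c * 1023)):04d}>" for c in coords)

# Example: one training pair for a 640x480 image.
prompt = "detect middle vase"  # grounded query, not just "vase"
target = box_to_loc_tokens((250, 120, 390, 400), 640, 480) + " middle vase"
print(prompt, "->", target)
```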
Fine-tune Gemma3n on videos with audio tracks, on a Colab A100 🔥 Just dropped the notebook where you can learn how to fine-tune Gemma3n on images+audio+text at the same time!
Keep in mind, it's made for educational purposes 🫡 We use LoRA, audio resampling & video downsampling to fit training in <40GB VRAM. Stretch modalities and unfreeze layers as you wish! 👇🏻 merve/smol-vision
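If you want a feel for those three tricks before opening the notebook, here is a minimal sketch. The model class, checkpoint, target modules, 16 kHz rate, and frame count are my assumptions for illustration, not the notebook's exact settings:

```python
# Hedged sketch of the memory-saving tricks, not the notebook verbatim.
import torchaudio
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageTextToText

# LoRA: train small low-rank adapters instead of all model weights.
model = AutoModelForImageTextToText.from_pretrained("google/gemma-3n-E2B-it")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])  # assumed modules
model = get_peft_model(model, lora)

# Audio resampling: bring clips to the rate the audio encoder expects
# (16 kHz is an assumption; check the processor's feature extractor config).
waveform, sr = torchaudio.load("clip.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16_000)

# Video downsampling: keep only n evenly spaced frames per clip.
def sample_frames(frames, n=8):
    step = max(len(frames) // n, 1)
    return frames[::step][:n]
```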