AI & ML interests

None defined yet.

Recent Activity

marc-thibault-h updated a Space about 18 hours ago
Hcompany/Holo1-Navigation
plcedoz38 updated a collection about 2 months ago
Holo1
philmod-h updated a model 2 months ago
Hcompany/Holo1-7B

Articles

sergiopaniego
posted an update 2 days ago
Latest TRL release brings major upgrades for multimodal alignment!

We dive into 3 new techniques to improve VLM post-training in our new blog:

πŸŒ‹ GRPO
🎞️ GSPO
πŸ™ MPO
βž• vLLM integration for online training w/ transformers backend\

🐑 Blog: https://huggingface.co/blog/trl-vlm-alignment
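For a feel of what GRPO does under the hood: it samples a group of completions per prompt and normalizes each completion's reward against the group's mean and standard deviation. A minimal pure-Python sketch of that group-relative advantage step (illustrative only, not TRL's actual implementation):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-4):
    """GRPO-style advantages: normalize each completion's reward
    against the mean/std of its own sampling group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# One prompt, a group of 4 sampled completions with scalar rewards:
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Completions above the group mean get positive advantages, those below get negative ones, so no learned value model is needed.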
merve
posted an update 2 days ago
GPT-4.1-mini level model right in your iPhone 🀯

openbmb/MiniCPM-V-4 is only 4B while surpassing GPT-4.1-mini in vision benchmarks πŸ”₯

allows commercial use as well!
sergiopaniego
posted an update 4 days ago
merve
posted an update 4 days ago
we're all sleeping on this OCR model rednote-hilab/dots.ocr πŸ”₯

dots.ocr is a new 3B model with SOTA performance, support for 100 languages & commercial use allowed! 🤯

a single end-to-end model to extract text from images and convert tables, formulas, and more into markdown 📝
try it: MohamedRashad/Dots-OCR
sergiopaniego
posted an update 5 days ago
Want to learn how to align a Vision Language Model (VLM) for reasoning using GRPO and TRL? πŸŒ‹

πŸ§‘β€πŸ³ We've got you covered!!

NEW multimodal post-training recipe to align a VLM using TRL in @HuggingFace 's Cookbook.

Go to the recipe 👉 https://huggingface.co/learn/cookbook/fine_tuning_vlm_grpo_trl

Powered by the latest TRL v0.20 release, this recipe shows how to teach Qwen2.5-VL-3B-Instruct to reason over images πŸŒ‹
merve
posted an update 5 days ago
massive releases and tons of FLUX.1 Krea LoRAs this past week!
here are some picks; find more models in the collection 🫡 merve/releases-august-2-6890c14248203522b7d0267f

LLMs πŸ’¬
> Tencent dropped tencent/Hunyuan-7B-Instruct
> Qwen released Qwen/Qwen3-Coder-30B-A3B-Instruct, 30B MoE with 3B active params for coding (OS)

vision/multimodal
> RedNote released rednote-hilab/dots.ocr - 3B OCR model (OS)
> Cohere released CohereLabs/command-a-vision-07-2025 - 112B (dense!) VLM for 6 languages
> StepFun-AI shipped stepfun-ai/step3 - 321B MoE VLM (OS)
> Skywork shipped Skywork/Skywork-UniPic-1.5B - new any-to-any model (image+text β†’ image+text) (OS)
sergiopaniego
posted an update 5 days ago
Just included example scripts for aligning models using GSPO (including VLM example) πŸ™†β€β™‚οΈπŸ™†β€β™‚οΈ

GSPO is the latest RL alignment algo by @Alibaba_Qwen and it's already supported in the latest TRL v0.20 release.

Super-easy-to-get-started example scripts below, GO run them!πŸ‘©β€πŸ’»πŸ‘©β€πŸ’»

πŸ§‘β€πŸŽ¨ Script: https://github.com/huggingface/trl/blob/main/examples/scripts/gspo.py
πŸ¦„ VLM script: https://github.com/huggingface/trl/blob/main/examples/scripts/gspo_vlm.py
🧩 More TRL examples: https://huggingface.co/docs/trl/main/en/example_overview
πŸ§™β€β™‚οΈ GSPO paper: Group Sequence Policy Optimization (2507.18071)
merve
posted an update 9 days ago
sergiopaniego
posted an update 10 days ago
Did you miss this? πŸ‘“

πŸ§™β€β™‚οΈvLLM + transformers integration just got upgraded with direct VLM support.

Select a VLM + model_impl=transformers and play via vLLM!
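In practice that looks roughly like the snippet below (a configuration sketch: the model name is a placeholder VLM, the sampling settings are arbitrary, and actually serving it needs a suitable GPU):

```python
from vllm import LLM, SamplingParams

# Ask vLLM to run the model through the transformers backend
# via model_impl="transformers" (placeholder VLM name).
llm = LLM(model="Qwen/Qwen2.5-VL-3B-Instruct", model_impl="transformers")
outputs = llm.generate(["Describe this image."],
                       SamplingParams(max_tokens=32))
```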
merve
posted an update 10 days ago
past week in open AI was insane 🔥 here are some picks, find more here merve/releases-july-25-688768ca47fe3693407e02d1

πŸ’¬ LLMs & VLMs
> Qwen/Qwen3-235B-A22B-Thinking-2507 had a new update (OS)
> Qwen/Qwen3-Coder-480B-A35B-Instruct is out with 480B total, 35B active params 🤯 (OS)
> AllenAI dropped an update to allenai/olmOCR-7B-0725 πŸ“
> InternLM released internlm/Intern-S1 - 235B Qwen3 MoE + 6B InternViT encoder (OS)
> OmniSVG/OmniSVG is a new SVG generation VLM (OS)

πŸ–ΌοΈ image/video/3D generation
> WanAI released Wan2.2 series - both T2V and I2V 14B models for high-quality video generation (OS) multimodalart/wan-22-688767e313337b434ed55112
> Tencent dropped tencent/HunyuanWorld-1 - image-to-3D scene generation model
sergiopaniego
posted an update 11 days ago
We just released TRL v0.20 with major multimodal upgrades!

πŸ‘οΈ VLM support for GRPO (highly requested by the community!)
🎞️ New GSPO trainer (from @Qwen , released last week, VLM-ready)
πŸ™ New MPO trainer (multimodal by design, as in the paper)

πŸ“ Full release notes here: https://github.com/huggingface/trl/releases/tag/v0.20.0
merve
posted an update 12 days ago
🀯 241B VLM with apache-2.0 license internlm/Intern-S1

internlm released Intern-S1: multimodal reasoning model based on 235B MoE Qwen3 and 6B InternViT 😍

benchmarks look great (πŸ‘‘ best model βœ… best open model)
sergiopaniego
posted an update 17 days ago
Yet Another New Multimodal Fine-Tuning Recipe πŸ₯§

πŸ§‘β€πŸ³ In this @HuggingFace Face Cookbook notebook, we demonstrate how to align a multimodal model (VLM) using Mixed Preference Optimization (MPO) using trl.

πŸ’‘ This recipe is powered by the new MPO support in trl, enabled through a recent upgrade to the DPO trainer!

We align the multimodal model using multiple optimization objectives (losses), guided by a preference dataset (chosen vs. rejected multimodal pairs).

Check it out! ➑️ https://huggingface.co/learn/cookbook/fine_tuning_vlm_mpo
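At its core, MPO's "multiple optimization objectives" reduce to a weighted sum of losses (a preference/DPO-style term, a quality term, and a generation/SFT term in the paper) computed on the same chosen/rejected batch. A toy sketch with illustrative weights (not the recipe's exact values):

```python
def mpo_loss(losses, weights):
    """MPO-style objective: a weighted sum of several alignment
    losses computed on the same chosen/rejected batch."""
    assert len(losses) == len(weights)
    return sum(w * l for w, l in zip(weights, losses))

# e.g. preference (DPO-style), quality, generation (SFT) loss values:
total = mpo_loss(losses=[0.40, 0.30, 1.20], weights=[0.8, 0.1, 0.1])
```

The weights let you trade off ranking chosen over rejected answers against keeping generation quality, which is why MPO slots naturally into an upgraded DPO trainer.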
merve
posted an update 17 days ago
so many open LLMs and image LoRAs dropped this past week, here are some picks for you 🫡 merve/releases-july-18-687e3fbd2ab9b39c51f9238b

LLMs
> ByteDance released a bunch of translation models called Seed-X-RM (7B) ByteDance-Seed/Seed-X-RM-7B
> NVIDIA released reasoning models, with the 32B one surpassing the giant Qwen3-235B, under a cc-by-4.0 license 👏 nvidia/openreasoning-nemotron-687730dae0170059860f1f01
> LG released a new EXAONE model (32B) LGAI-EXAONE/EXAONE-4.0-32B

VLMs/any-to-any
> vidore/colqwen-omni-v0.1 is a new any-to-any retriever (MIT)
> HiDream-ai/HiDream-E1-1 is an image+text in, image+text out model (MIT)

LoRAs
> There's a bunch of LoRAs based on Flux Kontext, gotta check out the collection 🀠
merve
posted an update 19 days ago
sergiopaniego
posted an update 22 days ago
πŸ§‘β€πŸ³ New Multimodal Fine-Tuning Recipe πŸ§‘β€πŸ³

⚡️ In this new @huggingface Cookbook recipe, I walk you through the process of fine-tuning a Vision Language Model (VLM) for Object Detection with Visual Grounding, using TRL.

πŸ” Object detection typically involves detecting categories in images (e.g., vase).

By combining it with visual grounding, we add contextual understanding so instead of detecting just "vase", we can detect "middle vase" in an image.

VLMs are super powerful!

In this case, I use PaliGemma 2 which already supports object detection and extend it to also add visual grounding.

πŸ€— Check it out here: https://huggingface.co/learn/cookbook/fine_tuning_vlm_object_detection_grounding
sergiopaniego
posted an update 23 days ago
Multiple NEW notebooks and scripts added to the Hugging Face Gemma recipes repo!

Thanks to the community 🫢, we're adding more and more recipes using Gemma πŸ’Ž

Fine-tuning for all modalities, function calling, RAG...

Repo: https://github.com/huggingface/huggingface-gemma-recipes

We're also open to new ideas from the community πŸ€—!
merve
posted an update 23 days ago
merve
posted an update 24 days ago
Fine-tune Gemma3n on videos with audio on a Colab A100 🔥
Just dropped the notebook where you can learn how to fine-tune Gemma3n on images+audio+text at the same time!

keep in mind, it's made for educational purposes 🫡 we do LoRA, audio resampling & video downsampling to be able to train in <40GB VRAM

stretch modalities and unfreeze layers as you wish! πŸ™πŸ» merve/smol-vision
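The video downsampling mentioned above usually amounts to keeping a small number of evenly spaced frames; a minimal sketch of that index selection (my helper, not the notebook's code):

```python
def sample_frame_indices(num_frames, num_keep):
    """Pick num_keep evenly spaced frame indices out of num_frames,
    so long videos fit in limited VRAM."""
    if num_keep >= num_frames:
        return list(range(num_frames))
    step = num_frames / num_keep
    return [int(i * step) for i in range(num_keep)]

# Keep 6 frames of a 300-frame clip before feeding the processor:
idx = sample_frame_indices(300, 6)
```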