---
datasets:
  - Jarvis1111/RobustVLGuard
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
---

# πŸš€ Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks

Welcome! This repository hosts the official implementation of our paper, "Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks."

Paper link: [arXiv:2504.01308](https://arxiv.org/abs/2504.01308)

Project page:


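Since the card's metadata tags this as an `image-text-to-text` model with `library_name: transformers`, loading it should look roughly like the sketch below. This is an assumption rather than documented usage: the model id is a placeholder for this repository's Hub id, and the exact chat format depends on the underlying VLM.

```python
from transformers import pipeline

# Placeholder model id: substitute this repository's actual Hub id.
pipe = pipeline("image-text-to-text", model="Jarvis1111/<model-id>")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/sample.png"},  # any test image
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Recent Transformers versions accept chat-style messages via `text=`.
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```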
## 🌟 What’s New?

We propose state-of-the-art solutions to enhance the robustness of Vision-Language Models (VLMs) against Gaussian noise and adversarial attacks. Key highlights include:

- 🎯 **Robust-VLGuard**: A pioneering multimodal safety dataset covering both aligned and misaligned image-text pair scenarios.
- πŸ›‘οΈ **DiffPure-VLM**: A novel defense framework that leverages diffusion models to neutralize adversarial noise by transforming it into Gaussian-like noise, significantly improving VLM resilience. A minimal sketch of the idea follows this list.


## ✨ Key Contributions

  • πŸ” Conducted a comprehensive vulnerability analysis revealing the sensitivity of mainstream VLMs to Gaussian noise.
  • πŸ“š Developed Robust-VLGuard, a dataset designed to improve model robustness without compromising helpfulness or safety alignment.
  • βš™οΈ Introduced DiffPure-VLM, an effective pipeline for defending against complex optimization-based adversarial attacks.
  • πŸ“ˆ Demonstrated strong performance across multiple benchmarks, outperforming existing baseline methods.
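For readers who want to inspect Robust-VLGuard directly, the dataset id in this card's metadata suggests it can be pulled with the standard πŸ€— Datasets API. The split name and the printed schema below are assumptions, not documented facts.

```python
from datasets import load_dataset

# Dataset id comes from this card's metadata; the split name is a guess.
ds = load_dataset("Jarvis1111/RobustVLGuard", split="train")

print(ds)            # inspect the features to see the actual schema
print(ds[0].keys())  # e.g. image / instruction / response fields
```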