---
license: apache-2.0
---

# Dataset Card for Self-Bench

## Overview

**Self-Bench** is a diagnostic benchmark designed to explore the relationship between the generative and discriminative capabilities of diffusion models. It consists of images generated by different diffusion models, enabling controlled evaluations in which the image domain is well-defined and consistent. The goal is to assess how well models understand images that are most "familiar" to them, that is, images they themselves have generated.

## Diffusion Models Used

We use three popular versions of Stable Diffusion to generate the dataset:

- [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5)
- [Stable Diffusion v2.0](https://huggingface.co/stabilityai/stable-diffusion-2)
- [Stable Diffusion 3 Medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium)

All prompts are adapted from the [GenEval benchmark](https://github.com/djghosh13/geneval), which is designed for evaluating compositional generation.

## Data Structure

Each image is annotated with:

- `prompt`: the text prompt used for generation
- `model`: the diffusion model used
- `tag`: the task type (e.g., `single_object`, `position`)
- `class`: the object class present in the image

## Access and Filtering

This repository contains the full dataset. If you are looking for the filtered versions curated by three annotators, please visit our GitHub repository:

🔗 [https://github.com/self-bench/-9793055367](https://github.com/self-bench/-9793055367)
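As a minimal sketch of how the annotation fields above can be used, the snippet below filters records by `model` and `tag`. The field names (`prompt`, `model`, `tag`, `class`) follow this card; the example records and the `filter_records` helper are illustrative, not part of the released dataset or its API.

```python
# Illustrative annotation records following the Self-Bench schema.
# The values below are made-up examples, not actual dataset entries.
records = [
    {"prompt": "a photo of a dog", "model": "sd1.5",
     "tag": "single_object", "class": "dog"},
    {"prompt": "a dog to the left of a cat", "model": "sd2.0",
     "tag": "position", "class": "dog"},
    {"prompt": "a photo of a cup", "model": "sd3-medium",
     "tag": "single_object", "class": "cup"},
]

def filter_records(records, **criteria):
    """Return records whose fields match every given key=value criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Keep only single-object generations, then list their object classes.
single_object = filter_records(records, tag="single_object")
print([r["class"] for r in single_object])
```

The same keyword-based filtering applies to any combination of fields, e.g. `filter_records(records, model="sd2.0", tag="position")` to select one model's outputs for one task type.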