---
license: cc0-1.0
task_categories:
- text-to-image
pretty_name: DiffusionDB Captions
size_categories:
- 1M<n<10M
dataset_info:
  features:
  - name: description
    dtype: string
  splits:
  - name: train
    num_bytes: 50735678
    num_examples: 967163
  - name: validation
    num_bytes: 101940
    num_examples: 1948
  - name: test
    num_bytes: 256037
    num_examples: 4870
  download_size: 35930405
  dataset_size: 51093655
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# DiffusionDB Captions
A caption corpus processed from the DiffusionDB dataset. The goal is to create a dataset similar to Flickr30k, but much larger.
Filtered from DiffusionDB + DiffusionDB Large (~14M prompts in total) down to ~1M. Pre-processing includes:
- removing style prompts
- removing overly long and overly specific prompts
- removing prompts containing many specific names
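The filtering steps above can be sketched roughly as follows. This is a minimal illustrative re-implementation, not the authors' actual pipeline: the style-marker list, length threshold, and proper-name heuristic are all assumptions.

```python
import re

# Hypothetical set of style tags common in DiffusionDB prompts (assumption,
# not the list actually used to build this dataset).
STYLE_MARKERS = {
    "trending on artstation", "octane render", "unreal engine",
    "8k", "4k", "highly detailed", "concept art",
}

def looks_like_style_prompt(prompt: str) -> bool:
    """Heuristic: prompts dominated by style tags rather than a scene description."""
    p = prompt.lower()
    return sum(marker in p for marker in STYLE_MARKERS) >= 2

def is_too_long(prompt: str, max_words: int = 40) -> bool:
    """Drop overly long, overly specific prompts (threshold is an assumption)."""
    return len(prompt.split()) > max_words

def has_many_specific_names(prompt: str, max_names: int = 2) -> bool:
    """Rough proxy: count capitalized words after the first as proper names."""
    capitalized = [w for w in prompt.split()[1:] if re.match(r"^[A-Z][a-z]+", w)]
    return len(capitalized) > max_names

def keep(prompt: str) -> bool:
    """Keep a prompt only if it passes all three filters."""
    return not (
        looks_like_style_prompt(prompt)
        or is_too_long(prompt)
        or has_many_specific_names(prompt)
    )
```

For example, `keep("a dog playing in a park")` passes all three filters, while a tag-heavy prompt such as `"portrait, 8k, octane render, trending on artstation"` is rejected as a style prompt.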
## Citation
If you find this curated dataset helpful, please consider citing our work:
```bibtex
@misc{rodriguez2025renderingawarereinforcementlearningvector,
  title={Rendering-Aware Reinforcement Learning for Vector Graphics Generation},
  author={Juan A. Rodriguez and Haotian Zhang and Abhay Puri and Aarash Feizi and Rishav Pramanik and Pascal Wichmann and Arnab Mondal and Mohammad Reza Samsami and Rabiul Awal and Perouz Taslakian and Spandana Gella and Sai Rajeswar and David Vazquez and Christopher Pal and Marco Pedersoli},
  year={2025},
  eprint={2505.20793},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.20793},
}
```