---
license: mit
task_categories:
  - image-text-to-text
  - text-to-image
tags:
  - text-to-image
  - evaluation
  - artifacts
---

# MagicData340K: A Large-Scale Dataset for Fine-Grained Artifacts Assessment in Text-to-Image Generation

This repository hosts MagicData340K, a large-scale human-annotated dataset central to the MagicMirror framework, which provides a systematic, fine-grained evaluation of physical artifacts (such as anatomical and structural flaws) in Text-to-Image (T2I) generation.

MagicData340K is the first large-scale human-annotated dataset for this task, comprising 340,000 generated images, each with fine-grained artifact labels. The annotations follow a detailed taxonomy of generated-image artifacts, making the dataset a key resource for understanding and improving the perceptual quality of T2I models.
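For quick exploration, the dataset can typically be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch: the repository ID (`wj-inf/MagicData340k`) and the column names mentioned in the comments are assumptions based on this card, not a confirmed schema; if the data is not stored in a format `datasets` can auto-detect, refer to the files tab or the project page instead.

```python
from datasets import load_dataset

# Assumption: the dataset is hosted under this repository ID and is
# stored in a format that `load_dataset` can resolve automatically.
ds = load_dataset("wj-inf/MagicData340k", split="train")

# Inspect one record; the exact columns (e.g., image, prompt,
# fine-grained artifact labels) depend on the released schema.
print(ds[0])
```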

**Paper:** [MagicMirror: A Large-Scale Dataset and Benchmark for Fine-Grained Artifacts Assessment in Text-to-Image Generation](https://arxiv.org/abs/2509.10260)

**Project Page:** https://wj-inf.github.io/MagicMirror-page/

**Code (MagicMirror Benchmark):** https://github.com/wj-inf/MagicMirror

## Related Hugging Face Assets

## Sample Usage

The MagicMirror framework, which utilizes this dataset, allows for the assessment of Text-to-Image (T2I) models. After setting up the environment as detailed in the MagicMirror GitHub repository, you can organize your image data (e.g., as ./output/sdxl/merged_result_sdxl.jsonl) and run the assessment script:

```bash
bash run.sh flux-schnell sdxl
```
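Before running the script, your generation results need to be collected into a JSONL file such as `./output/sdxl/merged_result_sdxl.jsonl`. The sketch below only illustrates the general pattern of writing one JSON object per line; the field names (`image_path`, `prompt`) are placeholders, and the actual fields expected by `run.sh` are defined in the MagicMirror repository.

```python
import json
from pathlib import Path

out_path = Path("./output/sdxl/merged_result_sdxl.jsonl")
out_path.parent.mkdir(parents=True, exist_ok=True)

# Placeholder records, one JSON object per line. The real schema
# expected by run.sh is documented in the MagicMirror GitHub repo.
records = [
    {"image_path": "./output/sdxl/images/0001.png", "prompt": "a red bicycle"},
    {"image_path": "./output/sdxl/images/0002.png", "prompt": "two hands shaking"},
]

with out_path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```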

## Citation

If you find MagicData340K or the MagicMirror framework useful for your research, please cite the paper:

```bibtex
@article{wang2025magicmirror,
  title   = {MagicMirror: A Large-Scale Dataset and Benchmark for Fine-Grained Artifacts Assessment in Text-to-Image Generation},
  author  = {Wang, Jia and Hu, Jie and Ma, Xiaoqi and Ma, Hanghang and Zeng, Yanbing and Wei, Xiaoming},
  journal = {arXiv preprint arXiv:2509.10260},
  year    = {2025}
}
```