
---
title: OmniSearchLeaderboard
emoji: 🐨
colorFrom: pink
colorTo: green
sdk: streamlit
sdk_version: 1.44.1
app_file: app.py
pinned: false
license: apache-2.0
---

# 📚 Dyn-VQA Dataset

📑 Dataset for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent

🌟 This dataset is linked to GitHub at this URL.

Each JSON item in the Dyn-VQA dataset is organized in the following format:

```json
{
    "image_url": "https://www.pcarmarket.com/static/media/uploads/galleries/photos/uploads/galleries/22387-pasewark-1986-porsche-944/.thumbnails/IMG_7102.JPG.jpg/IMG_7102.JPG-tiny-2048x0-0.5x0.jpg",
    "question": "What is the model of car from this brand?",
    "question_id": "qid",
    "answer": ["保时捷 944", "Porsche 944."]
}
```
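The format above can be parsed with the standard library. Below is a minimal sketch, assuming the items are stored one JSON object at a time; the actual file name and container layout of the released dataset may differ.

```python
import json

# Hypothetical single item in the Dyn-VQA format shown above
# (the image_url here is a placeholder, not a real dataset entry).
item_json = '''
{
    "image_url": "https://example.com/car.jpg",
    "question": "What is the model of car from this brand?",
    "question_id": "qid",
    "answer": ["Porsche 944."]
}
'''

item = json.loads(item_json)
print(item["question"])   # the VQA question text
print(item["answer"])     # a list of acceptable gold answers
```

Note that `answer` is a list: an item may carry several acceptable surface forms (e.g. both a Chinese and an English rendering of the same answer), so evaluation should match against any entry.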

🔥 Dyn-VQA will be updated regularly. Latest version: 202502.

πŸ“ Citation

```bibtex
@article{li2024benchmarkingmultimodalretrievalaugmented,
      title={Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent},
      author={Yangning Li and Yinghui Li and Xinyu Wang and Yong Jiang and Zhen Zhang and Xinran Zheng and Hui Wang and Hai-Tao Zheng and Pengjun Xie and Philip S. Yu and Fei Huang and Jingren Zhou},
      year={2024},
      eprint={2411.02937},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.02937},
}
```

When citing our work, please also consider citing the original papers; the relevant citation information is listed here.

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference