---
license: apache-2.0
task_categories:
- question-answering
language:
- ja
size_categories:
- n<1K
---

# JGraphQA

## Introduction

We introduce JGraphQA, a multimodal benchmark for evaluating the chart-understanding capabilities of Large Multimodal Models (LMMs) in Japanese.

To create JGraphQA, we first conducted a detailed analysis of the existing ChartQA benchmark. Focusing on Japanese investor relations (IR) materials, we then collected 100 images covering four types: pie charts, line charts, bar charts, and tables. For each image, we created two question-answer pairs, for a total of 200 questions.

All questions and answers were manually written and verified to ensure an accurate and meaningful evaluation.

## Installation

The snippets below assume evaluation with [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main). Please make sure to install lmms-eval before using this benchmark.

```sh
# Create a dedicated conda environment; a separate path (./lmms-eval-env) is used
# so that it does not collide with the lmms-eval repository cloned below.
conda create --prefix ./lmms-eval-env python=3.10 -y
conda activate ./lmms-eval-env
pip install --upgrade pip

# Clone and install lmms-eval v0.3.0.
git clone --branch v0.3.0 https://github.com/EvolvingLMMs-Lab/lmms-eval
cd lmms-eval
pip install -e .
```
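
If you want a quick sanity check that lmms-eval was installed correctly, printing the CLI help is usually enough (this assumes the environment created above is still active):

```sh
# Should print the lmms-eval command-line options without errors.
python -m lmms_eval --help
```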
After installing lmms-eval, prepare the dataset and the task files as follows:

- Access the URLs listed in the "citation_pdf_url" column of "source.csv" and download the corresponding PDF files.
  Rename each downloaded file according to the file name specified in the "local_file_name" column of "source.csv".
  (Alternatively, you may keep the original file names and update the "local_file_name" column accordingly.)
  Place the downloaded PDF files in the ./pdf directory.
- Run "create_dataset_for_lmms-eval.ipynb" to generate "jgraphqa.parquet".
- Copy "jgraphqa.yaml", "utils.py", and the generated "jgraphqa.parquet" file into the [lmms_eval/tasks](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks)/jgraphqa directory, creating the jgraphqa directory if it does not already exist (see the sketch after this list).
- Add the path to the generated jgraphqa.parquet file on line 3 of jgraphqa.yaml.
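
The following is a minimal sketch of the file-layout steps above. It assumes you run it from the directory containing this dataset's files and that the lmms-eval repository was cloned into ./lmms-eval as in the Installation step; adjust the paths to match your setup.

```sh
# Directory for the downloaded (and renamed) IR PDFs read by the notebook.
mkdir -p ./pdf

# Create the JGraphQA task directory inside the lmms-eval checkout and copy the task files into it.
mkdir -p ./lmms-eval/lmms_eval/tasks/jgraphqa
cp jgraphqa.yaml utils.py jgraphqa.parquet ./lmms-eval/lmms_eval/tasks/jgraphqa/
```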
### Optional

- If you would like to evaluate [Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1](https://huggingface.co/r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1), first follow the instructions on the r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1 page to install LLaVA and the other necessary components (after installing lmms-eval as above). Then overwrite [lmms_eval/models/llava_onevision.py](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py) with the attached "llava_onevision.py"; a copy-command sketch is given at the end of this section.
- If you encounter an error related to wandb, please run the following command:

```sh
pip install wandb==0.18.5
```
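
For the llava_onevision.py replacement described in the first bullet above, a copy along the following lines should work. It assumes the attached file sits in the current directory and that lmms-eval was cloned into ./lmms-eval; adjust the paths to match your checkout.

```sh
# Overwrite the stock model wrapper with the attached version.
cp ./llava_onevision.py ./lmms-eval/lmms_eval/models/llava_onevision.py
```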
## Usage

- Using the lmms-eval framework, please run the following command:

```bash
CUDA_VISIBLE_DEVICES=0,1 python -m lmms_eval \
    --model llava_onevision \
    --model_args pretrained="r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1",model_name=llava_llama_3,conv_template=llava_llama_3,device_map=auto \
    --tasks jgraphqa \
    --batch_size=1 \
    --log_samples \
    --log_samples_suffix llava-onevision \
    --output_path ./logs/ \
    --wandb_args=project=lmms-eval,job_type=eval,name=Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1
```