Dataset Card for GitTaskBench
The dataset was presented in the paper GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging.
Dataset Details
Dataset Description
GitTaskBench is a benchmark dataset designed to evaluate the capabilities of code-based intelligent agents in solving real-world tasks by leveraging GitHub repositories.
It contains 54 representative tasks across 7 domains, carefully curated to reflect real-world complexity and economic value. Each task is associated with a fixed GitHub repository to ensure reproducibility and fairness in evaluation.
- Curated by: QuantaAlpha Research Team
- Funded by: Not specified
- Shared by: GitTaskBench Team
- Language(s): Primarily English (task descriptions, documentation)
- License: cc-by-nc-sa-4.0
Dataset Sources
- Repository: https://github.com/QuantaAlpha/GitTaskBench
- Paper: arXiv:2508.18993 (https://arxiv.org/abs/2508.18993)
- Organization: Team Homepage
Uses
Direct Use
- Evaluating LLM-based agents (e.g., RepoMaster, SWE-Agent, Aider, OpenHands).
- Benchmarking repository-level reasoning and execution.
- Training/testing frameworks for real-world software engineering tasks.
Out-of-Scope Use
- Not intended for personal data processing.
- Not designed as a dataset for training NLP models directly.
- Not suitable for commercial applications requiring private/sensitive datasets.
Dataset Structure
- Tasks: 54 total, spanning 7 domains.
- The 7 domains are:
- Image Processing
- Video Processing
- Speech Processing
- Physiological Signals Processing
- Security and Privacy
- Web Scraping
- Office Document Processing
Each task specifies:
- Input requirements (file types, formats).
- Output expectations.
- Evaluation metrics (task-specific, e.g., accuracy thresholds, PSNR for image quality, Hasler-Bülthoff metric for video).
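Many of these metrics are standard signal- or text-quality measures that can be reproduced outside the benchmark harness. As a point of reference, here is a minimal PSNR computation in NumPy; it is an illustrative sketch only, not GitTaskBench's actual evaluation code, and the function signature is our own.

```python
import numpy as np

def psnr(reference: np.ndarray, output: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shaped images, in dB."""
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy usage with random data standing in for real image files.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0.0, 5.0, size=clean.shape), 0, 255)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```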
Usage Example
To get started with GitTaskBench, follow these steps for environment setup and evaluation.
1. Set Up ⚙️
First, create a new conda environment and install the pinned PyTorch build:

```bash
conda create -n gittaskbench python=3.10 -y
conda activate gittaskbench
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 \
    --extra-index-url https://download.pytorch.org/whl/cu113
```
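Before installing the benchmark itself, it may be worth confirming that the pinned PyTorch build sees your GPU (the cu113 wheels above assume a CUDA 11.3-compatible driver). A quick check, run inside the activated environment:

```python
import torch

# The cu113 wheels installed above should report "1.11.0+cu113" here;
# cuda.is_available() returning False usually means a missing or mismatched CUDA driver.
print(torch.__version__)
print(torch.cuda.is_available())
```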
Then, install gittaskbench with pip:

```bash
git clone https://github.com/QuantaAlpha/GitTaskBench.git
cd GitTaskBench
# editable install of the package and its CLI
pip install -e .
```
Alternatively, install only the dependencies:

```bash
pip install -r requirements.txt
```
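If you used the editable install, a quick sanity check that the package is registered in the environment (the distribution name "gittaskbench" is assumed here, matching the CLI name; adjust if the project's packaging metadata differs):

```python
import importlib.metadata

# Assumes the distribution is named "gittaskbench"; raises PackageNotFoundError otherwise.
print(importlib.metadata.version("gittaskbench"))
```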
2. Quick Start 💡
To evaluate a single task, use the following command. The example below evaluates the Trafilatura_01 task:

```bash
cd GitTaskBench
# Outputs are saved in the default "./output" directory, e.g. "./output/Trafilatura_01/output.txt"
gittaskbench grade --taskid Trafilatura_01
```
Running the command produces an analysis report (.jsonl) at the default path (./test_results/Trafilatura_01). See test_results_for_show/ for a sample.
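To inspect a per-task report programmatically rather than opening it by hand, a generic JSON-Lines reader like the sketch below is enough. The directory follows the default path mentioned above; the filename and field names are deliberately left unspecified, since the exact schema is shown by the samples in test_results_for_show/.

```python
import json
from pathlib import Path

# Default report directory from the command above; the exact .jsonl filename is
# not documented here, so pick up whatever JSON-Lines files the harness wrote.
report_dir = Path("test_results/Trafilatura_01")

for report_path in sorted(report_dir.glob("*.jsonl")):
    for line in report_path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # Keys are whatever the report contains; printing them reveals the schema.
        print(report_path.name, sorted(record.keys()))
```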
The complete commands can be found in the 🤖 Automation Evaluation section of the GitHub README.
To evaluate all tasks, use the --all flag; the command below automatically iterates through and runs the evaluation for every task:

```bash
gittaskbench grade --all
```
After completing the evaluation, you can analyze and summarize the test results with the statistics command. It aggregates the evaluation results in the specified directory and outputs an analysis report (.txt):

```bash
gittaskbench eval
```

See test_reports/ for a sample.
Each task entry contains:
- task_id: Unique task identifier (e.g., Trafilatura_01)
- domain: Task domain (e.g., Image Processing, Speech Processing, etc.)
- description: Natural language description of the task
- input_format: Expected input file type/format
- output_requirement: Required output specification
- evaluation_metric: Evaluation protocol and pass/fail criteria
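For concreteness, a single task entry might look like the following Python dictionary. The field names mirror the schema above, but every value is an illustrative placeholder rather than the actual GitTaskBench record for Trafilatura_01.

```python
# Hypothetical task entry; field names follow the schema above, values are made up.
example_task = {
    "task_id": "Trafilatura_01",
    "domain": "Web Scraping",
    "description": "Extract the main article text from the given HTML page "
                   "using the trafilatura repository.",
    "input_format": "HTML file",
    "output_requirement": "Plain-text file containing the extracted article body",
    "evaluation_metric": "Pass if the extracted text exceeds a similarity threshold "
                         "against the reference extraction",
}

print(example_task["task_id"], "->", example_task["domain"])
```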
Dataset Creation
Curation Rationale
Current agent benchmarks often lack real-world grounding. GitTaskBench fills this gap by focusing on practical, repository-driven tasks that mirror how developers solve real problems using GitHub projects.
Source Data
Data Collection and Processing
- Selected GitHub repositories that match strict criteria (stability, completeness, reproducibility).
- Curated real-world tasks mapped to fixed repositories.
- Defined consistent evaluation protocols across tasks.
Who are the source data producers?
- Source repositories come from open-source GitHub projects.
- Benchmark curated by QuantaAlpha team (researchers from CAS, Tsinghua, PKU, CMU, HKUST, etc.).
Annotations
- Task-specific evaluation metrics are provided as annotations.
- No human-labeled data annotations beyond benchmark definitions.
Personal and Sensitive Information
- Dataset does not include personally identifiable information.
- Repositories selected exclude sensitive or private data.
Bias, Risks, and Limitations
- Bias: Repository and task selection may reflect research biases toward specific domains.
- Risk: The benchmark assumes GitHub accessibility; tasks may become less relevant if the underlying repositories change in the future.
- Limitation: Tasks are curated and fixed; not all real-world cases are covered.
Recommendations
- Use this benchmark for real-world evaluation of code agents.
- Ensure compliance with licensing before re-distribution.
Citation
If you use GitTaskBench, please cite the paper:
BibTeX:
```bibtex
@misc{ni2025gittaskbench,
      title={GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging},
      author={Ziyi Ni and Huacan Wang and Shuo Zhang and Shuo Lu and Ziyang He and Wang You and Zhenheng Tang and Yuntao Du and Bill Sun and Hongzhang Liu and Sen Hu and Ronghao Chen and Bo Li and Xin Li and Chen Hu and Binxing Jiao and Daxin Jiang and Pin Lyu},
      year={2025},
      eprint={2508.18993},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2508.18993},
}
```
More Information
- Maintainer: QuantaAlpha Research Team
- Contact: See GitTaskBench GitHub Issues
✨ Key Features:
- Multi-modal tasks (vision, speech, text, signals).
- Repository-level evaluation.
- Real-world relevance (PDF extraction, video coloring, speech analysis, etc.).
- Extensible design for new tasks.