
To request access to the AECBench dataset, please complete the access request form on this page. Access is restricted to non-commercial use. All requests are reviewed, and approval may take up to 3 business days.


🏗️ AECBench

🇺🇸 English | 🇨🇳 中文说明



Project Introduction

AECBench is an open-source evaluation benchmark for large language models (LLMs) in the Architecture, Engineering & Construction (AEC) domain, jointly released by East China Architectural Design & Research Institute Co., Ltd. (ECADI) of China Construction Group and Tongji University. The dataset systematically evaluates LLMs' knowledge memory, understanding, reasoning, computation, and application capabilities across architecture, engineering, and construction. To date, it is the first Chinese evaluation benchmark to cover the full AEC lifecycle, and the one with the most comprehensive cognitive hierarchy.

AECBench is grounded in real engineering business scenarios: its tasks were designed by interdisciplinary teams of engineers and validated through two rounds of expert review, ensuring data authenticity, professionalism, and quality. In addition, AECBench introduces an "LLM-as-a-Judge" mechanism to enable scalable, automated assessment of open-ended long-text tasks.

Core Advantages

Hierarchical Cognitive Framework: The first five-level cognitive evaluation system for the AEC domain, covering Knowledge Memory, Knowledge Understanding, Knowledge Reasoning, Knowledge Computation, and Knowledge Application.

High-Quality Dataset: Carefully crafted by engineers from standards, internal documents, and professional certification exam questions; it contains 4,800 questions covering 23 real task types, in formats including multiple choice, classification, extraction, and generation.

Automated Evaluation Mechanism: An "LLM-as-a-Judge" review mechanism automatically scores open-ended tasks against expert-written rubrics, significantly improving evaluation consistency and scalability (a minimal illustrative sketch follows this list).

Real Industry Scenarios: Questions are drawn from building codes, design calculations, construction management, and equipment selection in real engineering workflows, ensuring task authenticity and industry relevance.
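The rubric-based judging mentioned above can be sketched as follows. This is a minimal illustration, assuming an OpenAI-compatible API: the model name, prompt wording, and rubric handling are placeholders, not AECBench's actual judge configuration.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment; any OpenAI-compatible endpoint works

def judge_report(report: str, rubric: str) -> str:
    """Ask a judge model to score a generated report against an expert rubric.

    Illustrative only: AECBench's real judge prompts and scoring scheme are
    defined by its expert rubrics, not by this placeholder prompt.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[
            {"role": "system", "content": "You are a strict grader for AEC design reports."},
            {
                "role": "user",
                "content": f"Rubric:\n{rubric}\n\nReport:\n{report}\n\n"
                           "Give a score from 1 to 5 and a brief justification.",
            },
        ],
    )
    return response.choices[0].message.content
```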

Dataset Content

AECBench contains 5 cognitive levels and 23 evaluation tasks, totaling approximately 4,800 samples.

| Cognitive Level | ID | Task | Description | Metric | Type |
| --- | --- | --- | --- | --- | --- |
| Knowledge Memory | 1-1 | Code Memory | Identify and judge the correctness and applicability of building code provisions. | Accuracy | Single Choice |
| Knowledge Memory | 1-2 | Professional Terminology | Select the correct term for a given definition. | Accuracy | Single Choice |
| Knowledge Memory | 1-3 | Abbreviation Recognition | Identify the meanings of common AEC-domain abbreviations. | Accuracy | Single Choice |
| Knowledge Understanding | 2-1 | Code Provision Understanding | Understand qualitative or quantitative requirements of building codes. | Accuracy | Single Choice |
| Knowledge Understanding | 2-2 | Design General Q&A | Complete or judge fundamental architectural design knowledge. | Accuracy | Single Choice |
| Knowledge Understanding | 2-3 | Table Provision Understanding | Understand code table content and parameter logic. | Accuracy | Single Choice |
| Knowledge Reasoning | 3-1 | Design Decision Making | Conduct standards-based design reasoning from scenario descriptions. | Accuracy | Single Choice |
| Knowledge Reasoning | 3-2 | Building Type Inference | Infer building types from descriptions. | Accuracy | Single Choice |
| Knowledge Reasoning | 3-3 | Tabular Design Reasoning | Reason and judge based on code table information. | Accuracy | Single Choice |
| Knowledge Computation | 4-1 | Architectural Design Calculation | Execute structural and architecture-related calculation tasks. | Accuracy | Single Choice |
| Knowledge Computation | 4-2 | MEP Design Calculation | Evaluate HVAC and electrical system calculation capabilities. | Accuracy | Single Choice |
| Knowledge Computation | 4-3 | Construction Management Calculation | Calculate construction parameters such as duration, personnel, and equipment. | Accuracy | Single Choice |
| Knowledge Computation | 4-4 | Engineering Economics Calculation | Conduct economic analysis involving cost, interest rates, and NPV. | Accuracy | Single Choice |
| Knowledge Application | 5-1-1 | Document Classification | Determine the professional category of architectural documents. | F1 | Classification |
| Knowledge Application | 5-1-2 | Text Proofreading | Correct grammatical and semantic errors in documents. | F0.5 | Generation |
| Knowledge Application | 5-1-3 | Compliance Check | Verify whether design parameters comply with code requirements. | Accuracy | Single Choice |
| Knowledge Application | 5-1-4 | Brand Consistency Verification | Check whether equipment specifications match brands. | soft-F1 | Extraction |
| Knowledge Application | 5-1-5 | Information Extraction | Extract project or bidding entity information from long texts. | soft-F1 | Extraction |
| Knowledge Application | 5-2-1 | Architectural Design Report Evaluation | Score architectural scheme reports against expert rubrics. | Kendall τ | Generation |
| Knowledge Application | 5-2-2 | Structural Design Report Evaluation | Score structural design reports against rubrics. | Kendall τ | Generation |
| Knowledge Application | 5-3-1 | Conceptual Scheme Generation | Generate code-compliant architectural conceptual design schemes. | Expert Score | Generation |
| Knowledge Application | 5-3-2 | Specialized Report Generation | Generate specialized design reports compliant with industry standards. | Expert Score | Generation |
| Knowledge Application | 5-3-3 | Bid Evaluation Report Generation | Generate evaluation reports based on equipment specifications and parameters. | Expert Score | Generation |
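For the non-accuracy metrics above, a minimal sketch of the underlying computations may help. This is an illustrative approximation only: the exact soft-F1 matching and scoring logic used by AECBench lives in AECData.py, and the score lists below are made-up examples.

```python
from scipy.stats import kendalltau

def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F-beta score; beta=0.5 (used for text proofreading) weights precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Kendall tau for the report-evaluation tasks: rank correlation between
# model-assigned scores and expert-assigned scores (made-up numbers).
model_scores = [3, 5, 2, 4]
expert_scores = [2, 5, 1, 4]
tau, _ = kendalltau(model_scores, expert_scores)
print(f"F0.5 = {f_beta(0.8, 0.6):.3f}, Kendall tau = {tau:.3f}")
```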

Data Format Example

```json
[
  {
    "instruction": "请你运用建筑知识从A,B,C,D中选出一个正确的答案,并写在[正确答案]和<eoa>之间。例如[正确答案]A<eoa>。请你严格按照这个格式回答。\n请你回答:",
    "question": "下列哪个术语的释义是:“地面坡度不大于1 : 20 且不设扶手的出入口”?\nA.渐变坡道\nB.坡道出入口\nC.无障碍通道\nD.平坡出入口\n\n",
    "answer": "[正确答案]D<eoa>"
  }
]
```
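In this sample, the instruction (in Chinese) tells the model to choose one answer from A, B, C, or D and write it between [正确答案] ("correct answer") and <eoa>; the question asks which term means "an entrance with a ground slope no steeper than 1:20 and no handrails". A minimal sketch of loading such a task file and assembling prompts follows; the file name is an assumption:

```python
import json

# Load one task file (the file name here is illustrative).
with open("aecbench_task.json", encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples:
    # The full model input is the instruction followed by the question.
    prompt = sample["instruction"] + sample["question"]
    reference = sample["answer"]  # e.g. "[正确答案]D<eoa>"
```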

User Guide

  1. Dataset Download: Please visit the AECBench Official Repository to download the complete dataset.

  2. Model Evaluation: AECBench supports an automated evaluation workflow based on OpenCompass:

    • Place the data folder in {PATH_TO_OPENCOMPASS}/AECBench/;
    • Place the AECBenchData folder in {PATH_TO_OPENCOMPASS}/configs/datasets/AECBenchData;
    • Place AECData.py in {PATH_TO_OPENCOMPASS}/opencompass/datasets/AECData.py;
    • Run the following command for evaluation:
    python run.py --models {your-model} --datasets AECBenchData_gen
    
  3. Evaluation Metrics: We provide evaluation metric implementations designed specifically for this dataset; they are integrated into OpenCompass and can be found in AECData.py.
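For the single-choice tasks, answers follow the [正确答案]X<eoa> format shown in the data example. Below is a hedged sketch of how such outputs can be extracted and scored for accuracy; the canonical implementation is the one integrated into OpenCompass via AECData.py:

```python
import re

ANSWER_RE = re.compile(r"\[正确答案\]\s*([A-D])\s*<eoa>")

def extract_choice(text: str) -> str | None:
    """Pull the chosen option letter out of a response, if present."""
    match = ANSWER_RE.search(text)
    return match.group(1) if match else None

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions whose extracted letter matches the reference's."""
    hits = 0
    for pred, ref in zip(predictions, references):
        gold = extract_choice(ref)
        if gold is not None and extract_choice(pred) == gold:
            hits += 1
    return hits / len(references)
```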

Evaluation Results

We evaluated multiple open-source and commercial models, including DeepSeek-R1, GPT-4o, Qwen, and GLM-4; detailed results are reported in the paper.

License

This project is released under the CC BY-NC license and is limited to non-commercial research use.

Acknowledgments

We thank East China Architectural Design & Research Institute Co., Ltd. of China Construction Group and Tongji University for their strong support of this project. We also thank Shanghai AI Laboratory for providing the OpenCompass evaluation framework, and all engineers and researchers who participated in data compilation, review, and experiments.

📚 Citation

```bibtex
@article{liang2025aecbench,
  title={AECBench: A Hierarchical Benchmark for Knowledge Evaluation of Large Language Models in the AEC Field},
  author={Chen Liang and Zhaoqi Huang and Haofen Wang and Fu Chai and Chunying Yu and Huanhuan Wei and Zhengjie Liu and Yanpeng Li and Hongjun Wang and Ruifeng Luo and Xianzhong Zhao},
  journal={arXiv preprint arXiv:2509.18776},
  year={2025}
}
```